Jan 17 00:19:10.072052 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:19:10.072068 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:19:10.072077 kernel: BIOS-provided physical RAM map:
Jan 17 00:19:10.072082 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:19:10.072086 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Jan 17 00:19:10.072090 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 17 00:19:10.072095 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 17 00:19:10.072100 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Jan 17 00:19:10.072111 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Jan 17 00:19:10.072115 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Jan 17 00:19:10.072119 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 17 00:19:10.072126 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 17 00:19:10.072130 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 17 00:19:10.072135 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 17 00:19:10.072140 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 17 00:19:10.072145 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 00:19:10.072152 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 17 00:19:10.072157 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 17 00:19:10.072161 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 17 00:19:10.072166 kernel: NX (Execute Disable) protection: active
Jan 17 00:19:10.072170 kernel: APIC: Static calls initialized
Jan 17 00:19:10.072175 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 17 00:19:10.072179 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e01b198
Jan 17 00:19:10.072184 kernel: efi: Remove mem135: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 17 00:19:10.072189 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 17 00:19:10.072193 kernel: SMBIOS 3.0.0 present.
Jan 17 00:19:10.072198 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 17 00:19:10.072203 kernel: Hypervisor detected: KVM
Jan 17 00:19:10.072210 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:19:10.072214 kernel: kvm-clock: using sched offset of 12596251853 cycles
Jan 17 00:19:10.072219 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:19:10.072224 kernel: tsc: Detected 2399.996 MHz processor
Jan 17 00:19:10.072229 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:19:10.072234 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:19:10.072238 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Jan 17 00:19:10.072243 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:19:10.072248 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:19:10.072255 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 17 00:19:10.072260 kernel: Using GB pages for direct mapping
Jan 17 00:19:10.072264 kernel: Secure boot disabled
Jan 17 00:19:10.072273 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:19:10.072278 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 17 00:19:10.072283 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 00:19:10.072288 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:10.072295 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:10.072300 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 17 00:19:10.072305 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:10.072310 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:10.072315 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:10.072319 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:10.072324 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 00:19:10.072332 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Jan 17 00:19:10.072336 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Jan 17 00:19:10.072342 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 17 00:19:10.072347 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Jan 17 00:19:10.072352 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Jan 17 00:19:10.072357 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Jan 17 00:19:10.072362 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Jan 17 00:19:10.072366 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Jan 17 00:19:10.072371 kernel: No NUMA configuration found
Jan 17 00:19:10.072379 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Jan 17 00:19:10.072384 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Jan 17 00:19:10.072389 kernel: Zone ranges:
Jan 17 00:19:10.072394 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:19:10.072399 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 00:19:10.072403 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Jan 17 00:19:10.072408 kernel: Movable zone start for each node
Jan 17 00:19:10.072413 kernel: Early memory node ranges
Jan 17 00:19:10.072418 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:19:10.072423 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Jan 17 00:19:10.072430 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Jan 17 00:19:10.072435 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Jan 17 00:19:10.072440 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Jan 17 00:19:10.072445 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Jan 17 00:19:10.072450 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:19:10.072455 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:19:10.072460 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 17 00:19:10.072465 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 17 00:19:10.072470 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Jan 17 00:19:10.072477 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 17 00:19:10.072482 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:19:10.072487 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:19:10.072492 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:19:10.072497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:19:10.072502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:19:10.072507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:19:10.072512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:19:10.072517 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:19:10.072524 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:19:10.072529 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:19:10.072534 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:19:10.072539 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:19:10.072543 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 17 00:19:10.072548 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:19:10.072553 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:19:10.072558 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:19:10.072563 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:19:10.072571 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:19:10.072576 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:19:10.072581 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 00:19:10.072586 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:19:10.072592 kernel: random: crng init done
Jan 17 00:19:10.072597 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:19:10.072602 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:19:10.072607 kernel: Fallback order for Node 0: 0
Jan 17 00:19:10.072614 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Jan 17 00:19:10.072619 kernel: Policy zone: Normal
Jan 17 00:19:10.072624 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:19:10.072629 kernel: software IO TLB: area num 2.
Jan 17 00:19:10.072634 kernel: Memory: 3827772K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 263192K reserved, 0K cma-reserved)
Jan 17 00:19:10.072639 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:19:10.072644 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:19:10.072648 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:19:10.072653 kernel: Dynamic Preempt: voluntary
Jan 17 00:19:10.072660 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:19:10.072666 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:19:10.072671 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:19:10.072677 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:19:10.072688 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:19:10.072697 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:19:10.072702 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:19:10.072708 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:19:10.072713 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:19:10.072718 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:19:10.072723 kernel: Console: colour dummy device 80x25
Jan 17 00:19:10.072728 kernel: printk: console [tty0] enabled
Jan 17 00:19:10.072736 kernel: printk: console [ttyS0] enabled
Jan 17 00:19:10.072741 kernel: ACPI: Core revision 20230628
Jan 17 00:19:10.072746 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:19:10.072752 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:19:10.072757 kernel: x2apic enabled
Jan 17 00:19:10.072764 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:19:10.072770 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:19:10.072775 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 00:19:10.072780 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399996)
Jan 17 00:19:10.072785 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 00:19:10.072790 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 00:19:10.072795 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 00:19:10.072801 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:19:10.072806 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 17 00:19:10.072813 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 00:19:10.072818 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 00:19:10.072824 kernel: active return thunk: srso_alias_return_thunk
Jan 17 00:19:10.072829 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Jan 17 00:19:10.072834 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 17 00:19:10.072839 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:19:10.072844 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:19:10.072850 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:19:10.072869 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:19:10.072877 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 00:19:10.072882 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 00:19:10.072887 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 00:19:10.072892 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 17 00:19:10.072897 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:19:10.072902 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 17 00:19:10.072908 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 17 00:19:10.072913 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 17 00:19:10.072918 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 17 00:19:10.072926 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 17 00:19:10.072931 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:19:10.072936 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:19:10.072941 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:19:10.072947 kernel: landlock: Up and running.
Jan 17 00:19:10.072952 kernel: SELinux: Initializing.
Jan 17 00:19:10.072957 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:19:10.072962 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:19:10.072967 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Jan 17 00:19:10.072975 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:19:10.072980 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:19:10.072985 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:19:10.072991 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 17 00:19:10.072996 kernel: ... version: 0
Jan 17 00:19:10.073001 kernel: ... bit width: 48
Jan 17 00:19:10.073006 kernel: ... generic registers: 6
Jan 17 00:19:10.073011 kernel: ... value mask: 0000ffffffffffff
Jan 17 00:19:10.073016 kernel: ... max period: 00007fffffffffff
Jan 17 00:19:10.073043 kernel: ... fixed-purpose events: 0
Jan 17 00:19:10.073049 kernel: ... event mask: 000000000000003f
Jan 17 00:19:10.073054 kernel: signal: max sigframe size: 3376
Jan 17 00:19:10.073059 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:19:10.073064 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:19:10.073069 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:19:10.073074 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:19:10.073079 kernel: .... node #0, CPUs: #1
Jan 17 00:19:10.073085 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:19:10.073092 kernel: smpboot: Max logical packages: 1
Jan 17 00:19:10.073098 kernel: smpboot: Total of 2 processors activated (9599.98 BogoMIPS)
Jan 17 00:19:10.073103 kernel: devtmpfs: initialized
Jan 17 00:19:10.073114 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:19:10.073119 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 17 00:19:10.073125 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:19:10.073130 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:19:10.073135 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:19:10.073140 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:19:10.073148 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:19:10.073153 kernel: audit: type=2000 audit(1768609148.903:1): state=initialized audit_enabled=0 res=1
Jan 17 00:19:10.073158 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:19:10.073163 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:19:10.073168 kernel: cpuidle: using governor menu
Jan 17 00:19:10.073174 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:19:10.073179 kernel: dca service started, version 1.12.1
Jan 17 00:19:10.073184 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 17 00:19:10.073189 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:19:10.073197 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:19:10.073202 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:19:10.073207 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:19:10.073213 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:19:10.073218 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:19:10.073223 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:19:10.073228 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:19:10.073233 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:19:10.073238 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:19:10.073246 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:19:10.073251 kernel: ACPI: Interpreter enabled
Jan 17 00:19:10.073256 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:19:10.073261 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:19:10.073267 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:19:10.073272 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:19:10.073277 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 00:19:10.073282 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:19:10.073436 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:19:10.073544 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 00:19:10.073641 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 00:19:10.073647 kernel: PCI host bridge to bus 0000:00
Jan 17 00:19:10.073747 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:19:10.073836 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:19:10.073946 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:19:10.074039 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 17 00:19:10.074132 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 17 00:19:10.074220 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Jan 17 00:19:10.074306 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:19:10.074415 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 00:19:10.074519 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 17 00:19:10.074662 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Jan 17 00:19:10.074886 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Jan 17 00:19:10.075023 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Jan 17 00:19:10.075132 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:19:10.075231 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 17 00:19:10.075327 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:19:10.075432 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:19:10.075533 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Jan 17 00:19:10.075635 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 17 00:19:10.075731 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Jan 17 00:19:10.075833 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 17 00:19:10.075967 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Jan 17 00:19:10.076071 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 17 00:19:10.076179 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Jan 17 00:19:10.076280 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 17 00:19:10.076375 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Jan 17 00:19:10.076492 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 17 00:19:10.076588 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Jan 17 00:19:10.076692 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 17 00:19:10.076787 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Jan 17 00:19:10.076916 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 17 00:19:10.077012 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Jan 17 00:19:10.077121 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:19:10.077219 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Jan 17 00:19:10.077318 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 00:19:10.077413 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 00:19:10.077516 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 00:19:10.077610 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Jan 17 00:19:10.077703 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Jan 17 00:19:10.077803 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 00:19:10.077922 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Jan 17 00:19:10.078030 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:19:10.078141 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Jan 17 00:19:10.078242 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Jan 17 00:19:10.078346 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:19:10.078443 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 17 00:19:10.078538 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 17 00:19:10.078633 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 17 00:19:10.078741 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 17 00:19:10.078845 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Jan 17 00:19:10.081005 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 17 00:19:10.081130 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 17 00:19:10.081247 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 17 00:19:10.081348 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Jan 17 00:19:10.081447 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Jan 17 00:19:10.081543 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 17 00:19:10.081641 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 17 00:19:10.081735 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 17 00:19:10.081839 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 17 00:19:10.081953 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Jan 17 00:19:10.082049 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 17 00:19:10.082154 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 17 00:19:10.082261 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 17 00:19:10.082364 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Jan 17 00:19:10.082462 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Jan 17 00:19:10.082558 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 17 00:19:10.082654 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 17 00:19:10.082749 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 17 00:19:10.082863 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 17 00:19:10.082965 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Jan 17 00:19:10.083067 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Jan 17 00:19:10.083170 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 17 00:19:10.083265 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 17 00:19:10.083361 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 17 00:19:10.083367 kernel: acpiphp: Slot [0] registered
Jan 17 00:19:10.083473 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:19:10.083573 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Jan 17 00:19:10.083672 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 17 00:19:10.083775 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:19:10.085905 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 17 00:19:10.086016 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 17 00:19:10.086145 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 17 00:19:10.086152 kernel: acpiphp: Slot [0-2] registered
Jan 17 00:19:10.086249 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 17 00:19:10.086343 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 17 00:19:10.086437 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 17 00:19:10.086447 kernel: acpiphp: Slot [0-3] registered
Jan 17 00:19:10.086542 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 17 00:19:10.086638 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 17 00:19:10.086731 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 17 00:19:10.086737 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:19:10.086743 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:19:10.086748 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:19:10.086753 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:19:10.086761 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 00:19:10.086767 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 00:19:10.086772 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 00:19:10.086777 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 00:19:10.086782 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 00:19:10.086787 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 00:19:10.086793 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 00:19:10.086798 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 00:19:10.086803 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 00:19:10.086811 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 00:19:10.086816 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 00:19:10.086821 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 00:19:10.086826 kernel: iommu: Default domain type: Translated
Jan 17 00:19:10.086831 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:19:10.086836 kernel: efivars: Registered efivars operations
Jan 17 00:19:10.086842 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:19:10.086847 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:19:10.088684 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Jan 17 00:19:10.088739 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Jan 17 00:19:10.088756 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Jan 17 00:19:10.088769 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Jan 17 00:19:10.089151 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 00:19:10.089407 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 00:19:10.089647 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:19:10.089663 kernel: vgaarb: loaded
Jan 17 00:19:10.089677 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:19:10.089691 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:19:10.089713 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:19:10.089726 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:19:10.089740 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:19:10.089753 kernel: pnp: PnP ACPI init
Jan 17 00:19:10.090037 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 17 00:19:10.090058 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 00:19:10.090071 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:19:10.090085 kernel: NET: Registered PF_INET protocol family
Jan 17 00:19:10.090154 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:19:10.090174 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:19:10.090187 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:19:10.090201 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:19:10.090214 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:19:10.090228 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:19:10.090242 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:19:10.090255 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:19:10.090269 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:19:10.090289 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:19:10.090550 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 17 00:19:10.090803 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 17 00:19:10.091089 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 17 00:19:10.091347 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 17 00:19:10.091586 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 17 00:19:10.091820 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 17 00:19:10.093173 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 17 00:19:10.093438 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 17 00:19:10.093687 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Jan 17 00:19:10.099513 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 17 00:19:10.099765 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 17 00:19:10.100102 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 17 00:19:10.100320 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 17 00:19:10.100516 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 17 00:19:10.100711 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 17 00:19:10.100922 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 17 00:19:10.101153 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 17 00:19:10.101349 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 17 00:19:10.101543 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 17 00:19:10.101744 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 17 00:19:10.103177 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 17 00:19:10.103384 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 17 00:19:10.103583 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 17 00:19:10.103767 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 17 00:19:10.105120 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 17 00:19:10.105336 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Jan 17 00:19:10.105525 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 17 00:19:10.105725 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 17 00:19:10.106523 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 17 00:19:10.106711 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 17 00:19:10.108455 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 17 00:19:10.108656 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 17 00:19:10.108841 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 17 00:19:10.109083 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 17 00:19:10.109285 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 17 00:19:10.109470 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 17 00:19:10.109666 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 17 00:19:10.109849 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 17 00:19:10.112799 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:19:10.113022 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:19:10.113234 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:19:10.113404 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window]
Jan 17 00:19:10.113572 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 17 00:19:10.113741 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window]
Jan 17 00:19:10.114304 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff]
Jan 17 00:19:10.114492 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 17 00:19:10.114682 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff]
Jan 17 00:19:10.116066 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff]
Jan 17 00:19:10.116275 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 17 00:19:10.116466 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 17 00:19:10.116657 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff]
Jan 17 00:19:10.116845 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 17 00:19:10.118096 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff]
Jan 17 00:19:10.118301 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 17 00:19:10.118489 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 17 00:19:10.118666 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff]
Jan 17 00:19:10.118843 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 17 00:19:10.120149 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 17 00:19:10.120337 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff]
Jan 17 00:19:10.120516 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 17 00:19:10.120710 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 17 00:19:10.120944 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff]
Jan 17 00:19:10.121170 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 17 00:19:10.121185 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 00:19:10.121197 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:19:10.121207 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 00:19:10.121219 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB)
Jan 17 00:19:10.121230 kernel: Initialise system trusted keyrings
Jan 17 00:19:10.121248 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:19:10.121259 kernel: Key type asymmetric registered
Jan 17 00:19:10.121269 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:19:10.121280 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:19:10.121290 kernel: io scheduler mq-deadline registered
Jan 17 00:19:10.121301 kernel: io scheduler kyber registered
Jan 17 00:19:10.121311 kernel: io scheduler bfq registered
Jan 17 00:19:10.121507 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 17 00:19:10.121668 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 17 00:19:10.121766 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 17 00:19:10.121877 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 17 00:19:10.121975 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 17 00:19:10.122071 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 17 00:19:10.122174 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 17 00:19:10.122269 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 17 00:19:10.122366 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 17 00:19:10.122461 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 17 00:19:10.122559 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 17 00:19:10.122656 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 17 00:19:10.122753 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 17 00:19:10.122850 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 17 00:19:10.122961 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 17 00:19:10.123057 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 17 00:19:10.123064 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 00:19:10.123165 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 17 00:19:10.123261 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 17 00:19:10.123270 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:19:10.123276 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 17 00:19:10.123282 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:19:10.123287 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:19:10.123293 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:19:10.123299 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:19:10.123304 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:19:10.123408 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 00:19:10.123419 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:19:10.123509 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 00:19:10.123601 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T00:19:09 UTC (1768609149)
Jan 17 00:19:10.123691 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 00:19:10.123697 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 00:19:10.123704 kernel: efifb: probing for efifb
Jan 17 00:19:10.123709 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k
Jan 17 00:19:10.123715 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 17 00:19:10.123724 kernel: efifb: scrolling: redraw
Jan 17 00:19:10.123730 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:19:10.123735 kernel: Console: switching to colour frame buffer device 160x50
Jan 17 00:19:10.123741 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:19:10.123747 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:19:10.123752 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:19:10.123758 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:19:10.123764 kernel: Segment Routing with IPv6
Jan 17 00:19:10.123769 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:19:10.123778 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:19:10.123784 kernel: Key type dns_resolver registered
Jan 17 00:19:10.123789 kernel: IPI shorthand broadcast: enabled
Jan 17 00:19:10.123795 kernel: sched_clock: Marking stable (1420011504, 188207202)->(1641117872, -32899166)
Jan 17 00:19:10.123800 kernel: registered taskstats version 1
Jan 17 00:19:10.123806 kernel: Loading compiled-in X.509 certificates
Jan 17 00:19:10.123812 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:19:10.123817 kernel: Key type .fscrypt registered
Jan 17 00:19:10.123823 kernel: Key type fscrypt-provisioning registered
Jan 17 00:19:10.123831 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:19:10.123836 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:19:10.123842 kernel: ima: No architecture policies found
Jan 17 00:19:10.123848 kernel: clk: Disabling unused clocks
Jan 17 00:19:10.123875 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:19:10.123881 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:19:10.123886 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:19:10.123892 kernel: Run /init as init process
Jan 17 00:19:10.123898 kernel: with arguments:
Jan 17 00:19:10.123906 kernel: /init
Jan 17 00:19:10.123911 kernel: with environment:
Jan 17 00:19:10.123917 kernel: HOME=/
Jan 17 00:19:10.123923 kernel: TERM=linux
Jan 17 00:19:10.123930 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:19:10.123938 systemd[1]: Detected virtualization kvm.
Jan 17 00:19:10.123944 systemd[1]: Detected architecture x86-64.
Jan 17 00:19:10.123953 systemd[1]: Running in initrd.
Jan 17 00:19:10.123958 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:19:10.123964 systemd[1]: Hostname set to .
Jan 17 00:19:10.123970 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:19:10.123976 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:19:10.123982 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:19:10.123988 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:19:10.123994 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:19:10.124003 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:19:10.124012 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:19:10.124018 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:19:10.124025 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:19:10.124030 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:19:10.124036 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:19:10.124042 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:19:10.124050 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:19:10.124056 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:19:10.124062 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:19:10.124068 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:19:10.124074 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:19:10.124080 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:19:10.124085 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:19:10.124091 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:19:10.124100 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:19:10.124111 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:19:10.124117 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:19:10.124123 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:19:10.124129 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:19:10.124135 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:19:10.124141 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:19:10.124146 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:19:10.124152 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:19:10.124161 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:19:10.124186 systemd-journald[189]: Collecting audit messages is disabled.
Jan 17 00:19:10.124203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:10.124209 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:19:10.124217 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:19:10.124223 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:19:10.124229 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:10.124236 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:19:10.124245 systemd-journald[189]: Journal started
Jan 17 00:19:10.124258 systemd-journald[189]: Runtime Journal (/run/log/journal/6bf3bcea570540fab8c8b36b631d73f5) is 8.0M, max 76.3M, 68.3M free.
Jan 17 00:19:10.103960 systemd-modules-load[190]: Inserted module 'overlay'
Jan 17 00:19:10.129896 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:19:10.135374 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:19:10.145163 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:19:10.145235 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:19:10.150168 systemd-modules-load[190]: Inserted module 'br_netfilter'
Jan 17 00:19:10.150871 kernel: Bridge firewalling registered
Jan 17 00:19:10.151009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:19:10.153980 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:19:10.155479 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:19:10.156508 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:19:10.162169 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:19:10.165964 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:19:10.166616 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:19:10.175036 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:19:10.177871 dracut-cmdline[215]: dracut-dracut-053
Jan 17 00:19:10.179179 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:19:10.182527 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:19:10.190291 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:19:10.212560 systemd-resolved[239]: Positive Trust Anchors:
Jan 17 00:19:10.213182 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:19:10.213207 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:19:10.215654 systemd-resolved[239]: Defaulting to hostname 'linux'.
Jan 17 00:19:10.218134 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:19:10.218585 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:19:10.242893 kernel: SCSI subsystem initialized
Jan 17 00:19:10.250879 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:19:10.258886 kernel: iscsi: registered transport (tcp)
Jan 17 00:19:10.276283 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:19:10.276358 kernel: QLogic iSCSI HBA Driver
Jan 17 00:19:10.332872 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:19:10.338972 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:19:10.401120 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:19:10.401175 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:19:10.403898 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:19:10.454890 kernel: raid6: avx512x4 gen() 24174 MB/s
Jan 17 00:19:10.472881 kernel: raid6: avx512x2 gen() 32140 MB/s
Jan 17 00:19:10.490876 kernel: raid6: avx512x1 gen() 43965 MB/s
Jan 17 00:19:10.508875 kernel: raid6: avx2x4 gen() 48596 MB/s
Jan 17 00:19:10.526881 kernel: raid6: avx2x2 gen() 50222 MB/s
Jan 17 00:19:10.545618 kernel: raid6: avx2x1 gen() 40677 MB/s
Jan 17 00:19:10.545690 kernel: raid6: using algorithm avx2x2 gen() 50222 MB/s
Jan 17 00:19:10.564711 kernel: raid6: .... xor() 36966 MB/s, rmw enabled
Jan 17 00:19:10.564772 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 00:19:10.580898 kernel: xor: automatically using best checksumming function avx
Jan 17 00:19:10.688911 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:19:10.706881 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:19:10.715127 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:19:10.725700 systemd-udevd[407]: Using default interface naming scheme 'v255'.
Jan 17 00:19:10.729576 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:19:10.738077 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:19:10.756436 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Jan 17 00:19:10.799330 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:19:10.803028 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:19:10.875063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:19:10.885211 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:19:10.927381 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:19:10.929766 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:19:10.931918 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:19:10.933291 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:19:10.940151 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:19:10.962473 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:19:10.966265 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:19:10.969906 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:19:10.981875 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 17 00:19:11.006924 kernel: ACPI: bus type USB registered
Jan 17 00:19:11.011021 kernel: usbcore: registered new interface driver usbfs
Jan 17 00:19:11.013357 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:19:11.016876 kernel: usbcore: registered new interface driver hub
Jan 17 00:19:11.013474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:19:11.014870 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:19:11.015209 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:19:11.015381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:11.015713 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:11.027399 kernel: usbcore: registered new device driver usb
Jan 17 00:19:11.026377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:11.030937 kernel: libata version 3.00 loaded.
Jan 17 00:19:11.031570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:19:11.032042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:11.044872 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:19:11.043262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:11.056913 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 17 00:19:11.061411 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 17 00:19:11.061573 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 17 00:19:11.064542 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:19:11.064869 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 17 00:19:11.068070 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 17 00:19:11.068227 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 17 00:19:11.069249 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:11.073535 kernel: hub 1-0:1.0: USB hub found
Jan 17 00:19:11.073711 kernel: hub 1-0:1.0: 4 ports detected
Jan 17 00:19:11.078081 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 17 00:19:11.079383 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:19:11.080882 kernel: hub 2-0:1.0: USB hub found
Jan 17 00:19:11.082376 kernel: hub 2-0:1.0: 4 ports detected
Jan 17 00:19:11.084876 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 00:19:11.089917 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 00:19:11.093651 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 00:19:11.093814 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 00:19:11.101890 kernel: scsi host1: ahci
Jan 17 00:19:11.104769 kernel: scsi host2: ahci
Jan 17 00:19:11.106516 kernel: scsi host3: ahci
Jan 17 00:19:11.110871 kernel: scsi host4: ahci
Jan 17 00:19:11.111028 kernel: scsi host5: ahci
Jan 17 00:19:11.113167 kernel: scsi host6: ahci
Jan 17 00:19:11.113198 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 17 00:19:11.115021 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 51
Jan 17 00:19:11.117093 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB)
Jan 17 00:19:11.117269 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 51
Jan 17 00:19:11.119612 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 17 00:19:11.119771 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 51
Jan 17 00:19:11.119779 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 17 00:19:11.119920 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 51
Jan 17 00:19:11.121867 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 17 00:19:11.122019 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 51
Jan 17 00:19:11.127386 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:19:11.127429 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 51
Jan 17 00:19:11.127446 kernel: GPT:17805311 != 160006143
Jan 17 00:19:11.137828 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:19:11.137862 kernel: GPT:17805311 != 160006143
Jan 17 00:19:11.137870 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:19:11.137878 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:19:11.139218 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:19:11.143878 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 17 00:19:11.319173 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 17 00:19:11.452468 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 17 00:19:11.452568 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 17 00:19:11.452591 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 17 00:19:11.456925 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 17 00:19:11.465920 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 17 00:19:11.465969 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 17 00:19:11.472850 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 17 00:19:11.472928 kernel: ata1.00: applying bridge limits
Jan 17 00:19:11.476912 kernel: ata1.00: configured for UDMA/100
Jan 17 00:19:11.483589 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 00:19:11.483726 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 00:19:11.534884 kernel: usbcore: registered new interface driver usbhid
Jan 17 00:19:11.534956 kernel: usbhid: USB HID core driver
Jan 17 00:19:11.560045 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 17 00:19:11.560142 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 17 00:19:11.579375 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 17 00:19:11.579800 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:19:11.592940 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (454)
Jan 17 00:19:11.602949 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jan 17 00:19:11.614912 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (458)
Jan 17 00:19:11.620779 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 17 00:19:11.634704 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 17 00:19:11.638397 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 17 00:19:11.639108 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 17 00:19:11.642997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 17 00:19:11.648099 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:19:11.654783 disk-uuid[581]: Primary Header is updated.
Jan 17 00:19:11.654783 disk-uuid[581]: Secondary Entries is updated.
Jan 17 00:19:11.654783 disk-uuid[581]: Secondary Header is updated.
Jan 17 00:19:11.659876 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:19:11.665939 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:19:11.670935 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:19:12.678916 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:19:12.679013 disk-uuid[582]: The operation has completed successfully.
Jan 17 00:19:12.763812 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:19:12.764263 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:19:12.790170 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:19:12.814512 sh[602]: Success
Jan 17 00:19:12.840980 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 17 00:19:12.921541 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:19:12.932020 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:19:12.945119 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:19:12.968492 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:19:12.968563 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:19:12.973411 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:19:12.979151 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:19:12.983559 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:19:12.999894 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:19:13.003269 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:19:13.006207 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:19:13.013212 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:19:13.018423 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:19:13.042447 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:13.042501 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:19:13.050909 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:19:13.063991 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:19:13.064043 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:19:13.092992 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:13.093227 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:19:13.104667 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:19:13.113214 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:19:13.221173 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:19:13.228931 ignition[706]: Ignition 2.19.0
Jan 17 00:19:13.229076 ignition[706]: Stage: fetch-offline
Jan 17 00:19:13.229995 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:19:13.229110 ignition[706]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:13.229119 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:19:13.229209 ignition[706]: parsed url from cmdline: ""
Jan 17 00:19:13.229213 ignition[706]: no config URL provided
Jan 17 00:19:13.229217 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:19:13.229225 ignition[706]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:19:13.229230 ignition[706]: failed to fetch config: resource requires networking
Jan 17 00:19:13.229358 ignition[706]: Ignition finished successfully
Jan 17 00:19:13.249131 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:19:13.280209 systemd-networkd[787]: lo: Link UP
Jan 17 00:19:13.280227 systemd-networkd[787]: lo: Gained carrier
Jan 17 00:19:13.284579 systemd-networkd[787]: Enumeration completed
Jan 17 00:19:13.285419 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:13.285427 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:19:13.285982 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:19:13.287131 systemd-networkd[787]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:13.287160 systemd-networkd[787]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:19:13.287971 systemd[1]: Reached target network.target - Network.
Jan 17 00:19:13.289211 systemd-networkd[787]: eth0: Link UP
Jan 17 00:19:13.289219 systemd-networkd[787]: eth0: Gained carrier
Jan 17 00:19:13.289232 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:13.294294 systemd-networkd[787]: eth1: Link UP
Jan 17 00:19:13.294301 systemd-networkd[787]: eth1: Gained carrier
Jan 17 00:19:13.294313 systemd-networkd[787]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:13.298154 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:19:13.333962 systemd-networkd[787]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 17 00:19:13.335442 ignition[790]: Ignition 2.19.0
Jan 17 00:19:13.335456 ignition[790]: Stage: fetch
Jan 17 00:19:13.335718 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:13.335739 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:19:13.335924 ignition[790]: parsed url from cmdline: ""
Jan 17 00:19:13.335932 ignition[790]: no config URL provided
Jan 17 00:19:13.335943 ignition[790]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:19:13.335962 ignition[790]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:19:13.335990 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 17 00:19:13.336287 ignition[790]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 17 00:19:13.369032 systemd-networkd[787]: eth0: DHCPv4 address 157.180.82.149/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 17 00:19:13.536568 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 17 00:19:13.544635 ignition[790]: GET result: OK
Jan 17 00:19:13.544748 ignition[790]: parsing config with SHA512: 5b49117dd12bf7584bd68204b571469627c142a87e6c539b33084fccf1121b942d9004ee25c9583d733037e7fa1cfab0ba973cc7159d03bec9c53f78a8ab029f
Jan 17 00:19:13.554713 unknown[790]: fetched base config from "system"
Jan 17 00:19:13.554740 unknown[790]: fetched base config from "system"
Jan 17 00:19:13.555732 ignition[790]: fetch: fetch complete
Jan 17 00:19:13.554752 unknown[790]: fetched user config from "hetzner"
Jan 17 00:19:13.555744 ignition[790]: fetch: fetch passed
Jan 17 00:19:13.563450 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:19:13.555842 ignition[790]: Ignition finished successfully
Jan 17 00:19:13.571191 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:19:13.599210 ignition[798]: Ignition 2.19.0
Jan 17 00:19:13.599249 ignition[798]: Stage: kargs
Jan 17 00:19:13.599602 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:13.599624 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:19:13.603642 ignition[798]: kargs: kargs passed
Jan 17 00:19:13.603734 ignition[798]: Ignition finished successfully
Jan 17 00:19:13.610820 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:19:13.618106 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:19:13.652162 ignition[805]: Ignition 2.19.0
Jan 17 00:19:13.652186 ignition[805]: Stage: disks
Jan 17 00:19:13.652477 ignition[805]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:13.652502 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:19:13.653891 ignition[805]: disks: disks passed
Jan 17 00:19:13.656810 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:19:13.653982 ignition[805]: Ignition finished successfully
Jan 17 00:19:13.658822 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:19:13.660361 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:19:13.661997 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:19:13.663591 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:19:13.665073 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:19:13.677112 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:19:13.701389 systemd-fsck[814]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 00:19:13.705510 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:19:13.715035 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:19:13.822926 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:19:13.824262 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:19:13.826118 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:19:13.833001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:19:13.837045 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:19:13.843559 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:19:13.847999 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:19:13.860704 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (822)
Jan 17 00:19:13.860754 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:13.860775 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:19:13.848970 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:19:13.870599 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:19:13.880077 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:19:13.880113 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:19:13.886829 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:19:13.891208 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:19:13.902020 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:19:13.966593 coreos-metadata[824]: Jan 17 00:19:13.966 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 17 00:19:13.969340 coreos-metadata[824]: Jan 17 00:19:13.968 INFO Fetch successful
Jan 17 00:19:13.970902 coreos-metadata[824]: Jan 17 00:19:13.970 INFO wrote hostname ci-4081-3-6-n-8c81c3eeb1 to /sysroot/etc/hostname
Jan 17 00:19:13.974676 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:19:13.978537 initrd-setup-root[850]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:19:13.986975 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:19:13.994462 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:19:14.001215 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:19:14.157804 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:19:14.170006 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:19:14.174123 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:19:14.185381 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:19:14.190226 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:14.229629 ignition[938]: INFO : Ignition 2.19.0
Jan 17 00:19:14.232040 ignition[938]: INFO : Stage: mount
Jan 17 00:19:14.232040 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:14.232040 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:19:14.236836 ignition[938]: INFO : mount: mount passed
Jan 17 00:19:14.236836 ignition[938]: INFO : Ignition finished successfully
Jan 17 00:19:14.238709 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:19:14.239944 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:19:14.247984 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:19:14.275186 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:19:14.303546 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (950)
Jan 17 00:19:14.303614 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:14.309183 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:19:14.314033 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:19:14.329196 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:19:14.329280 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:19:14.333853 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:19:14.372879 ignition[968]: INFO : Ignition 2.19.0
Jan 17 00:19:14.372879 ignition[968]: INFO : Stage: files
Jan 17 00:19:14.375421 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:14.375421 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:19:14.375421 ignition[968]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:19:14.378480 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:19:14.378480 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:19:14.381171 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:19:14.382355 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:19:14.383646 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:19:14.383130 unknown[968]: wrote ssh authorized keys file for user: core
Jan 17 00:19:14.386183 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:19:14.388413 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 17 00:19:14.558091 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:19:14.560046 systemd-networkd[787]: eth1: Gained IPv6LL
Jan 17 00:19:14.624613 systemd-networkd[787]: eth0: Gained IPv6LL
Jan 17 00:19:14.859689 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:19:14.859689 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 17 00:19:14.862958 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 17 00:19:15.286094 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 00:19:15.620152 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 17 00:19:15.620152 ignition[968]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:19:15.624839 ignition[968]: INFO : files: files passed
Jan 17 00:19:15.624839 ignition[968]: INFO : Ignition finished successfully
Jan 17 00:19:15.627302 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:19:15.636172 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:19:15.643147 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:19:15.648185 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:19:15.648367 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:19:15.679958 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:19:15.679958 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:19:15.682640 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:19:15.686656 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:19:15.690238 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:19:15.697074 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:19:15.762264 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:19:15.762528 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:19:15.764791 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:19:15.766335 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:19:15.768293 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:19:15.775209 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:19:15.803344 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:19:15.811121 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:19:15.843967 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:19:15.845345 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:19:15.847337 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:19:15.849308 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:19:15.849503 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:19:15.852210 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:19:15.854246 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:19:15.856042 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:19:15.858003 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:19:15.859952 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:19:15.861752 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:19:15.863668 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:19:15.865538 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:19:15.867380 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:19:15.869282 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:19:15.871086 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:19:15.871288 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:19:15.873967 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:19:15.875805 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:19:15.877595 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:19:15.877783 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:19:15.879517 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:19:15.879696 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:19:15.882234 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:19:15.882428 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:19:15.884196 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:19:15.884374 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:19:15.885939 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 00:19:15.886106 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:19:15.897304 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:19:15.902233 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:19:15.904088 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:19:15.905187 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:19:15.913298 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:19:15.914586 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:19:15.924064 ignition[1020]: INFO : Ignition 2.19.0
Jan 17 00:19:15.924064 ignition[1020]: INFO : Stage: umount
Jan 17 00:19:15.931333 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:15.931333 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:19:15.931333 ignition[1020]: INFO : umount: umount passed
Jan 17 00:19:15.931333 ignition[1020]: INFO : Ignition finished successfully
Jan 17 00:19:15.927288 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:19:15.927505 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:19:15.930557 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:19:15.930762 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:19:15.936086 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:19:15.936261 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:19:15.939988 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:19:15.940088 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:19:15.942288 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:19:15.942380 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:19:15.945060 systemd[1]: Stopped target network.target - Network.
Jan 17 00:19:15.945927 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:19:15.946024 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:19:15.947905 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:19:15.949597 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:19:15.949951 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:19:15.951405 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:19:15.952214 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:19:15.955104 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:19:15.955210 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:19:15.956041 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:19:15.956116 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:19:15.959133 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:19:15.959232 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:19:15.961035 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:19:15.961109 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:19:15.962038 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:19:15.964687 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:19:15.969929 systemd-networkd[787]: eth1: DHCPv6 lease lost
Jan 17 00:19:15.971059 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:19:15.972097 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:19:15.972293 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:19:15.975020 systemd-networkd[787]: eth0: DHCPv6 lease lost
Jan 17 00:19:15.977396 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:19:15.977616 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:19:15.979298 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:19:15.979464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:19:15.983470 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:19:15.983561 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:19:15.984880 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:19:15.984963 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:19:15.996030 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:19:15.997285 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:19:15.997384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:19:15.998090 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:19:15.998155 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:19:15.998802 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:19:15.999947 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:19:16.001106 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:19:16.001214 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:19:16.002443 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:19:16.024503 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:19:16.024788 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:19:16.028191 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:19:16.028364 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:19:16.031411 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:19:16.031515 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:19:16.032953 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:19:16.033015 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:19:16.034203 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:19:16.034278 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:19:16.036280 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:19:16.036355 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:19:16.038347 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:19:16.038422 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:19:16.046094 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:19:16.046783 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:19:16.046903 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:19:16.047649 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 00:19:16.047716 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:19:16.051991 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:19:16.052071 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:19:16.053232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:19:16.053298 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:16.059814 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:19:16.060024 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:19:16.061729 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:19:16.067080 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:19:16.080669 systemd[1]: Switching root.
Jan 17 00:19:16.127336 systemd-journald[189]: Journal stopped
Jan 17 00:19:17.801592 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:19:17.801659 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:19:17.801677 kernel: SELinux: policy capability open_perms=1
Jan 17 00:19:17.801686 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:19:17.801698 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:19:17.801707 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:19:17.801715 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:19:17.801723 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:19:17.801732 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:19:17.801740 kernel: audit: type=1403 audit(1768609156.386:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:19:17.801760 systemd[1]: Successfully loaded SELinux policy in 86.710ms.
Jan 17 00:19:17.801787 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.756ms.
Jan 17 00:19:17.801797 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:19:17.801806 systemd[1]: Detected virtualization kvm.
Jan 17 00:19:17.801815 systemd[1]: Detected architecture x86-64.
Jan 17 00:19:17.801824 systemd[1]: Detected first boot.
Jan 17 00:19:17.801835 systemd[1]: Hostname set to .
Jan 17 00:19:17.801844 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:19:17.805715 zram_generator::config[1063]: No configuration found.
Jan 17 00:19:17.805739 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:19:17.805750 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:19:17.805759 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:19:17.805769 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:19:17.805778 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:19:17.805792 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:19:17.805801 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:19:17.805810 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:19:17.805819 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:19:17.805827 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:19:17.805836 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:19:17.805844 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:19:17.806241 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:19:17.806253 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:19:17.806265 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:19:17.806274 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:19:17.806283 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:19:17.806292 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:19:17.806301 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:19:17.806309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:19:17.806318 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:19:17.806329 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:19:17.806338 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:19:17.806347 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:19:17.806355 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:19:17.806364 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:19:17.806373 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:19:17.806382 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:19:17.806390 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:19:17.806401 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:19:17.806410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:19:17.806423 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:19:17.806439 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:19:17.806455 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:19:17.806468 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:19:17.806478 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:19:17.806490 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:19:17.806499 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:17.806512 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:19:17.806521 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:19:17.806529 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:19:17.806538 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:19:17.806547 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:19:17.806555 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:19:17.806564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:19:17.806573 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:19:17.806584 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:19:17.806593 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:19:17.806602 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:19:17.806613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:19:17.806622 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:19:17.806630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:19:17.806639 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:19:17.806648 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:19:17.806661 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:19:17.806670 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:19:17.806679 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:19:17.806688 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:19:17.806696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:19:17.806705 kernel: loop: module loaded
Jan 17 00:19:17.806714 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:19:17.806723 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:19:17.806732 kernel: fuse: init (API version 7.39)
Jan 17 00:19:17.806743 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:19:17.806755 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:19:17.806767 systemd[1]: Stopped verity-setup.service.
Jan 17 00:19:17.806780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:17.806791 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:19:17.806804 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:19:17.806826 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:19:17.806839 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:19:17.806874 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:19:17.806888 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:19:17.806901 kernel: ACPI: bus type drm_connector registered
Jan 17 00:19:17.806911 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:19:17.806923 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:19:17.806932 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:19:17.806961 systemd-journald[1132]: Collecting audit messages is disabled.
Jan 17 00:19:17.806983 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:19:17.806992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:19:17.807003 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:19:17.807012 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:19:17.807021 systemd-journald[1132]: Journal started
Jan 17 00:19:17.807036 systemd-journald[1132]: Runtime Journal (/run/log/journal/6bf3bcea570540fab8c8b36b631d73f5) is 8.0M, max 76.3M, 68.3M free.
Jan 17 00:19:17.444009 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:19:17.474309 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 17 00:19:17.475199 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:19:17.813021 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:19:17.810350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:19:17.810487 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:19:17.811148 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:19:17.811284 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:19:17.811910 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:19:17.812035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:19:17.812620 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:19:17.814550 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:19:17.815271 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:19:17.824611 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:19:17.832935 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:19:17.838959 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:19:17.840084 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:19:17.840124 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:19:17.842641 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:19:17.851232 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:19:17.856677 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:19:17.857216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:19:17.859937 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:19:17.862980 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:19:17.863597 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:19:17.868961 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:19:17.869366 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:19:17.873969 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:19:17.881329 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:19:17.887589 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:19:17.891904 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:19:17.892762 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:19:17.894236 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:19:17.894794 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:19:17.906252 systemd-journald[1132]: Time spent on flushing to /var/log/journal/6bf3bcea570540fab8c8b36b631d73f5 is 22.252ms for 1181 entries.
Jan 17 00:19:17.906252 systemd-journald[1132]: System Journal (/var/log/journal/6bf3bcea570540fab8c8b36b631d73f5) is 8.0M, max 584.8M, 576.8M free.
Jan 17 00:19:17.954049 systemd-journald[1132]: Received client request to flush runtime journal.
Jan 17 00:19:17.954080 kernel: loop0: detected capacity change from 0 to 142488
Jan 17 00:19:17.928550 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:19:17.929575 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:19:17.940035 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:19:17.940722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:19:17.942977 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:19:17.958774 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:19:17.979234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:19:17.980449 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:19:17.981137 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:19:17.987165 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 00:19:17.989892 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:19:17.996693 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jan 17 00:19:17.996711 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jan 17 00:19:18.003785 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:19:18.011177 kernel: loop1: detected capacity change from 0 to 8
Jan 17 00:19:18.010757 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:19:18.032886 kernel: loop2: detected capacity change from 0 to 229808
Jan 17 00:19:18.044057 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:19:18.052337 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:19:18.065594 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Jan 17 00:19:18.065611 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Jan 17 00:19:18.070696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:19:18.085873 kernel: loop3: detected capacity change from 0 to 140768
Jan 17 00:19:18.129882 kernel: loop4: detected capacity change from 0 to 142488
Jan 17 00:19:18.150328 kernel: loop5: detected capacity change from 0 to 8
Jan 17 00:19:18.153026 kernel: loop6: detected capacity change from 0 to 229808
Jan 17 00:19:18.176884 kernel: loop7: detected capacity change from 0 to 140768
Jan 17 00:19:18.195756 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 17 00:19:18.196911 (sd-merge)[1213]: Merged extensions into '/usr'.
Jan 17 00:19:18.202877 systemd[1]: Reloading requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:19:18.203012 systemd[1]: Reloading...
Jan 17 00:19:18.300882 zram_generator::config[1242]: No configuration found.
Jan 17 00:19:18.360393 ldconfig[1177]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:19:18.421209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:19:18.458118 systemd[1]: Reloading finished in 254 ms.
Jan 17 00:19:18.485797 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:19:18.486728 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:19:18.490298 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:19:18.496003 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:19:18.497966 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:19:18.502355 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:19:18.519952 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:19:18.520299 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:19:18.521096 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:19:18.521310 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Jan 17 00:19:18.521386 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Jan 17 00:19:18.525685 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:19:18.525700 systemd-tmpfiles[1285]: Skipping /boot
Jan 17 00:19:18.526618 systemd[1]: Reloading requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:19:18.526738 systemd[1]: Reloading...
Jan 17 00:19:18.554457 systemd-udevd[1286]: Using default interface naming scheme 'v255'.
Jan 17 00:19:18.556138 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:19:18.556164 systemd-tmpfiles[1285]: Skipping /boot
Jan 17 00:19:18.621894 zram_generator::config[1320]: No configuration found.
Jan 17 00:19:18.751879 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1331)
Jan 17 00:19:18.763883 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 17 00:19:18.772520 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:19:18.786880 kernel: ACPI: button: Power Button [PWRF]
Jan 17 00:19:18.804904 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:19:18.818803 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 00:19:18.818903 systemd[1]: Reloading finished in 291 ms.
Jan 17 00:19:18.832037 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:19:18.833296 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:19:18.867947 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 17 00:19:18.868728 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:19:18.874888 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 17 00:19:18.875155 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:19:18.876880 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 17 00:19:18.882257 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 17 00:19:18.882447 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 17 00:19:18.882235 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:19:18.882749 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:19:18.884345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:19:18.887162 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:19:18.889733 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 17 00:19:18.896005 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:19:18.896473 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:19:18.898186 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:19:18.908027 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:19:18.917003 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:19:18.919069 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:19:18.919902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:18.923920 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:18.924068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:18.924222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:18.931744 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:19:18.932882 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:18.934965 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:18.935121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:18.941151 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:19:18.942065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:18.942152 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:18.947898 systemd[1]: Finished ensure-sysext.service. Jan 17 00:19:18.949876 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:19:18.959368 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:19:18.975134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:19:18.976849 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:19:18.977306 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:18.979315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:19:18.990185 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:19:18.990357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:19:18.990991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:19:18.996161 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:19:19.009095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:19:19.009263 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:19:19.009996 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:19:19.010452 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 17 00:19:19.013524 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:19:19.018111 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:19:19.018906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:19:19.031010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 17 00:19:19.032503 augenrules[1435]: No rules Jan 17 00:19:19.036425 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:19:19.037168 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:19:19.038934 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:19:19.045418 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:19:19.056846 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:19:19.072967 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:19:19.073398 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:19:19.088241 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:19:19.095687 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:19:19.102428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:19.112973 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 17 00:19:19.113023 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:19:19.116528 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 17 00:19:19.116741 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 00:19:19.116758 kernel: [drm] features: -context_init Jan 17 00:19:19.119053 kernel: [drm] number of scanouts: 1 Jan 17 00:19:19.119084 kernel: [drm] number of cap sets: 0 Jan 17 00:19:19.122932 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 17 00:19:19.130741 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 00:19:19.130785 kernel: Console: switching to colour frame buffer device 160x50 Jan 17 00:19:19.136873 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 00:19:19.142892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:19:19.143081 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:19.144309 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:19:19.151125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:19:19.172093 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:19:19.173220 systemd-networkd[1408]: lo: Link UP Jan 17 00:19:19.173224 systemd-networkd[1408]: lo: Gained carrier Jan 17 00:19:19.176095 systemd-networkd[1408]: Enumeration completed Jan 17 00:19:19.180030 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:19:19.181417 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 17 00:19:19.182988 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:19:19.186996 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:19:19.187002 systemd-networkd[1408]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:19:19.189951 systemd-networkd[1408]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:19:19.189957 systemd-networkd[1408]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:19:19.190967 systemd-networkd[1408]: eth0: Link UP Jan 17 00:19:19.191022 systemd-networkd[1408]: eth0: Gained carrier Jan 17 00:19:19.191058 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:19:19.198414 systemd-networkd[1408]: eth1: Link UP Jan 17 00:19:19.198421 systemd-networkd[1408]: eth1: Gained carrier Jan 17 00:19:19.198433 systemd-networkd[1408]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:19:19.199670 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:19:19.199887 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:19:19.204388 systemd-resolved[1409]: Positive Trust Anchors: Jan 17 00:19:19.204402 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:19:19.204424 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:19:19.205716 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:19:19.213255 systemd-resolved[1409]: Using system hostname 'ci-4081-3-6-n-8c81c3eeb1'. Jan 17 00:19:19.216591 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:19:19.217730 systemd[1]: Reached target network.target - Network. Jan 17 00:19:19.217775 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:19:19.222219 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:19.231469 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:19:19.231763 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:19:19.231835 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:19:19.232010 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:19:19.232104 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:19:19.232363 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jan 17 00:19:19.232508 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:19:19.232572 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:19:19.232626 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:19:19.232650 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:19:19.232694 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:19:19.234581 systemd-networkd[1408]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 17 00:19:19.235167 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Jan 17 00:19:19.236076 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:19:19.239773 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:19:19.254756 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:19:19.257098 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:19:19.259343 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:19:19.259774 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:19:19.261972 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:19:19.262977 systemd-networkd[1408]: eth0: DHCPv4 address 157.180.82.149/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 17 00:19:19.263554 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Jan 17 00:19:19.263717 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:19:19.263755 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:19:19.267881 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:19:19.270156 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:19:19.273985 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:19:19.276255 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:19:19.280019 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:19:19.288036 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:19:19.288677 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:19:19.289956 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:19:19.292490 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:19:19.296985 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 17 00:19:19.299422 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:19:19.303040 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:19:19.320578 systemd[1]: Starting systemd-logind.service - User Login Management... 
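Both DHCP leases above are /32 host addresses (10.0.0.3/32 on eth1, and 157.180.82.149/32 on eth0 with gateway 172.31.1.1), so the gateway sits outside the interface's own prefix and is only reachable through an on-link host route that networkd installs, a pattern common on Hetzner and similar clouds. A quick standard-library check using the values from the log:

```python
# Demonstrates that the gateway from the eth0 lease above is not inside
# the /32 prefix, i.e. it is an off-link gateway that needs an explicit
# on-link route. Values copied from the DHCPv4 log lines.
import ipaddress

iface = ipaddress.ip_interface("157.180.82.149/32")
gateway = ipaddress.ip_address("172.31.1.1")

print(gateway in iface.network)  # False
```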
Jan 17 00:19:19.323344 coreos-metadata[1476]: Jan 17 00:19:19.323 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 17 00:19:19.322300 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:19:19.327062 coreos-metadata[1476]: Jan 17 00:19:19.324 INFO Fetch successful Jan 17 00:19:19.327062 coreos-metadata[1476]: Jan 17 00:19:19.325 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 17 00:19:19.327062 coreos-metadata[1476]: Jan 17 00:19:19.325 INFO Fetch successful Jan 17 00:19:19.322674 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:19:19.335282 jq[1478]: false Jan 17 00:19:19.330793 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:19:19.337075 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:19:19.347183 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:19:19.347151 dbus-daemon[1477]: [system] SELinux support is enabled Jan 17 00:19:19.347664 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:19:19.358317 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:19:19.358472 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:19:19.358749 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:19:19.358906 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:19:19.359542 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:19:19.359681 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:19:19.378692 update_engine[1494]: I20260117 00:19:19.378621 1494 main.cc:92] Flatcar Update Engine starting Jan 17 00:19:19.381673 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:19:19.381726 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:19:19.384046 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:19:19.384069 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:19:19.386863 jq[1495]: true Jan 17 00:19:19.390460 update_engine[1494]: I20260117 00:19:19.390420 1494 update_check_scheduler.cc:74] Next update check in 10m45s Jan 17 00:19:19.394915 systemd[1]: Started update-engine.service - Update Engine. 
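The coreos-metadata fetches above go to Hetzner's link-local metadata service, which supplies the instance configuration and, later in the log, the SSH public keys for the core user. A minimal reproduction of the same request; it only succeeds from inside a Hetzner VM, because 169.254.169.254 is link-local and not routable elsewhere:

```python
# Reproduces the fetch that coreos-metadata logs above. Works only on
# the instance itself; the response is the instance metadata document.
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/metadata"

with urllib.request.urlopen(URL, timeout=5) as resp:
    print(resp.read().decode())
```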
Jan 17 00:19:19.396520 extend-filesystems[1479]: Found loop4 Jan 17 00:19:19.396520 extend-filesystems[1479]: Found loop5 Jan 17 00:19:19.396520 extend-filesystems[1479]: Found loop6 Jan 17 00:19:19.396520 extend-filesystems[1479]: Found loop7 Jan 17 00:19:19.396520 extend-filesystems[1479]: Found sda Jan 17 00:19:19.396520 extend-filesystems[1479]: Found sda1 Jan 17 00:19:19.396520 extend-filesystems[1479]: Found sda2 Jan 17 00:19:19.396520 extend-filesystems[1479]: Found sda3 Jan 17 00:19:19.396520 extend-filesystems[1479]: Found usr Jan 17 00:19:19.396520 extend-filesystems[1479]: Found sda4 Jan 17 00:19:19.396520 extend-filesystems[1479]: Found sda6 Jan 17 00:19:19.396520 extend-filesystems[1479]: Found sda7 Jan 17 00:19:19.396520 extend-filesystems[1479]: Found sda9 Jan 17 00:19:19.396520 extend-filesystems[1479]: Checking size of /dev/sda9 Jan 17 00:19:19.429210 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:19:19.429522 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:19:19.447557 tar[1502]: linux-amd64/LICENSE Jan 17 00:19:19.447557 tar[1502]: linux-amd64/helm Jan 17 00:19:19.447785 jq[1513]: true Jan 17 00:19:19.449109 extend-filesystems[1479]: Resized partition /dev/sda9 Jan 17 00:19:19.459169 extend-filesystems[1527]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:19:19.468600 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Jan 17 00:19:19.493239 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:19:19.498036 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:19:19.506833 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1340) Jan 17 00:19:19.528243 systemd-logind[1487]: New seat seat0. Jan 17 00:19:19.534788 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 00:19:19.534808 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:19:19.534979 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:19:19.623062 bash[1547]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:19:19.624787 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:19:19.635263 systemd[1]: Starting sshkeys.service... Jan 17 00:19:19.665818 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:19:19.675710 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:19:19.685361 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:19:19.688828 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:19:19.714203 coreos-metadata[1560]: Jan 17 00:19:19.714 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 17 00:19:19.717349 coreos-metadata[1560]: Jan 17 00:19:19.717 INFO Fetch successful Jan 17 00:19:19.718712 containerd[1508]: time="2026-01-17T00:19:19.718619663Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:19:19.718167 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
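The resize pass above takes /dev/sda9 from 1617920 to 19393531 blocks, and the later EXT4 messages pin the block size at 4 KiB, so the root filesystem grows online from roughly 6 GiB to about 74 GiB to fill the provisioned disk:

```python
# Block counts copied verbatim from the resize2fs/EXT4 lines; the "(4k)"
# in the resize output fixes the block size at 4096 bytes.
BLOCK_SIZE = 4096
for label, blocks in (("before", 1_617_920), ("after", 19_393_531)):
    print(f"{label}: {blocks * BLOCK_SIZE / 2**30:.1f} GiB")
# before: 6.2 GiB
# after: 74.0 GiB
```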
Jan 17 00:19:19.725123 unknown[1560]: wrote ssh authorized keys file for user: core Jan 17 00:19:19.729050 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:19:19.737438 containerd[1508]: time="2026-01-17T00:19:19.737299156Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:19.743885 containerd[1508]: time="2026-01-17T00:19:19.743072354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:19.743885 containerd[1508]: time="2026-01-17T00:19:19.743105674Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:19:19.743885 containerd[1508]: time="2026-01-17T00:19:19.743121674Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:19:19.744032 containerd[1508]: time="2026-01-17T00:19:19.744009715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:19:19.744723 containerd[1508]: time="2026-01-17T00:19:19.744333135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:19.744723 containerd[1508]: time="2026-01-17T00:19:19.744400375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:19.744723 containerd[1508]: time="2026-01-17T00:19:19.744410825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:19.745877 containerd[1508]: time="2026-01-17T00:19:19.745836817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:19.745966 containerd[1508]: time="2026-01-17T00:19:19.745955197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:19.746002 containerd[1508]: time="2026-01-17T00:19:19.745993097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:19.746031 containerd[1508]: time="2026-01-17T00:19:19.746021537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:19.746159 containerd[1508]: time="2026-01-17T00:19:19.746148367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:19.746472 containerd[1508]: time="2026-01-17T00:19:19.746458438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:19.746618 containerd[1508]: time="2026-01-17T00:19:19.746606408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:19.746650 containerd[1508]: time="2026-01-17T00:19:19.746642678Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:19:19.746752 containerd[1508]: time="2026-01-17T00:19:19.746740638Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:19:19.746832 containerd[1508]: time="2026-01-17T00:19:19.746823778Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:19:19.751408 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:19:19.751581 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:19:19.759062 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:19:19.769683 containerd[1508]: time="2026-01-17T00:19:19.769661967Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:19:19.779107 containerd[1508]: time="2026-01-17T00:19:19.769743827Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:19:19.779107 containerd[1508]: time="2026-01-17T00:19:19.769757427Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:19:19.779107 containerd[1508]: time="2026-01-17T00:19:19.769772087Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:19:19.779107 containerd[1508]: time="2026-01-17T00:19:19.769783477Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:19:19.770131 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:19:19.777167 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779372089Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779534559Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779616609Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779636749Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779646149Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779655519Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779665149Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779673659Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779683819Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779694059Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779703659Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779712659Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779722199Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:19:19.780794 containerd[1508]: time="2026-01-17T00:19:19.779736809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.779392 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779746069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779754839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779764169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779773169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779782379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779790369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779799749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779808479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779818319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779828259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779836179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779844779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779874200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779892820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781085 containerd[1508]: time="2026-01-17T00:19:19.779909050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.780707 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:19:19.781296 containerd[1508]: time="2026-01-17T00:19:19.779916520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:19:19.781296 containerd[1508]: time="2026-01-17T00:19:19.779950470Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:19:19.781296 containerd[1508]: time="2026-01-17T00:19:19.779962520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:19:19.781296 containerd[1508]: time="2026-01-17T00:19:19.779970450Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:19:19.781296 containerd[1508]: time="2026-01-17T00:19:19.779980460Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:19:19.781296 containerd[1508]: time="2026-01-17T00:19:19.779987100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:19:19.781296 containerd[1508]: time="2026-01-17T00:19:19.779995390Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:19:19.781296 containerd[1508]: time="2026-01-17T00:19:19.780002620Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:19:19.781296 containerd[1508]: time="2026-01-17T00:19:19.780009570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:19:19.781410 containerd[1508]: time="2026-01-17T00:19:19.780186400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:19:19.781410 containerd[1508]: time="2026-01-17T00:19:19.780235890Z" level=info msg="Connect containerd service" Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.782066032Z" level=info msg="using legacy CRI server" Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.782098052Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.782193292Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.782742743Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:19:19.783996 
containerd[1508]: time="2026-01-17T00:19:19.783038543Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.783077444Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.783143824Z" level=info msg="Start subscribing containerd event" Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.783176864Z" level=info msg="Start recovering state" Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.783242894Z" level=info msg="Start event monitor" Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.783250774Z" level=info msg="Start snapshots syncer" Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.783262754Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.783268654Z" level=info msg="Start streaming server" Jan 17 00:19:19.783996 containerd[1508]: time="2026-01-17T00:19:19.783309254Z" level=info msg="containerd successfully booted in 0.065376s" Jan 17 00:19:19.783951 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:19:19.800602 update-ssh-keys[1576]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:19:19.801582 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:19:19.804967 systemd[1]: Finished sshkeys.service. Jan 17 00:19:19.817887 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Jan 17 00:19:19.847028 extend-filesystems[1527]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 00:19:19.847028 extend-filesystems[1527]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 17 00:19:19.847028 extend-filesystems[1527]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Jan 17 00:19:19.858813 extend-filesystems[1479]: Resized filesystem in /dev/sda9 Jan 17 00:19:19.858813 extend-filesystems[1479]: Found sr0 Jan 17 00:19:19.850020 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:19:19.850406 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:19:20.072966 tar[1502]: linux-amd64/README.md Jan 17 00:19:20.083016 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:19:20.896116 systemd-networkd[1408]: eth0: Gained IPv6LL Jan 17 00:19:20.897396 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Jan 17 00:19:20.897887 systemd-networkd[1408]: eth1: Gained IPv6LL Jan 17 00:19:20.898753 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Jan 17 00:19:20.904057 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:19:20.907351 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:19:20.917479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:20.929258 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:19:20.971599 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:19:22.427199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:22.432680 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:19:22.434954 systemd[1]: Startup finished in 1.600s (kernel) + 6.540s (initrd) + 6.133s (userspace) = 14.274s. 
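Two details above are worth noting. First, containerd's "failed to load cni during init" error is benign at this stage: no CNI plugin has written a config into /etc/cni/net.d yet, which is expected before cluster bootstrap. Second, the "Startup finished" line reports 1.600s (kernel) + 6.540s (initrd) + 6.133s (userspace) = 14.274s, but the displayed components sum to 14.273s; systemd computes the total from unrounded timestamps and rounds each part independently for display:

```python
# Durations copied from the "Startup finished" line above; the missing
# millisecond is rounding slack between the displayed parts and total.
from decimal import Decimal

parts = (Decimal("1.600"), Decimal("6.540"), Decimal("6.133"))
total_displayed = Decimal("14.274")

print(sum(parts))                    # 14.273
print(total_displayed - sum(parts))  # 0.001
```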
Jan 17 00:19:22.440185 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:19:23.327581 kubelet[1608]: E0117 00:19:23.327501 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:19:23.333925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:19:23.334375 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:19:23.335072 systemd[1]: kubelet.service: Consumed 1.794s CPU time. Jan 17 00:19:24.461762 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:19:24.467497 systemd[1]: Started sshd@0-157.180.82.149:22-20.161.92.111:47478.service - OpenSSH per-connection server daemon (20.161.92.111:47478). Jan 17 00:19:25.252207 sshd[1620]: Accepted publickey for core from 20.161.92.111 port 47478 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:19:25.256278 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:25.275197 systemd-logind[1487]: New session 1 of user core. Jan 17 00:19:25.277088 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:19:25.285274 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:19:25.324687 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:19:25.334447 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:19:25.351793 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:19:25.507575 systemd[1624]: Queued start job for default target default.target. Jan 17 00:19:25.517991 systemd[1624]: Created slice app.slice - User Application Slice. Jan 17 00:19:25.518015 systemd[1624]: Reached target paths.target - Paths. Jan 17 00:19:25.518025 systemd[1624]: Reached target timers.target - Timers. Jan 17 00:19:25.519494 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:19:25.554846 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:19:25.555169 systemd[1624]: Reached target sockets.target - Sockets. Jan 17 00:19:25.555200 systemd[1624]: Reached target basic.target - Basic System. Jan 17 00:19:25.555313 systemd[1624]: Reached target default.target - Main User Target. Jan 17 00:19:25.555393 systemd[1624]: Startup finished in 190ms. Jan 17 00:19:25.556159 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:19:25.571199 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:19:26.126649 systemd[1]: Started sshd@1-157.180.82.149:22-20.161.92.111:47492.service - OpenSSH per-connection server daemon (20.161.92.111:47492). Jan 17 00:19:26.879720 sshd[1635]: Accepted publickey for core from 20.161.92.111 port 47492 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:19:26.883091 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:26.893962 systemd-logind[1487]: New session 2 of user core. Jan 17 00:19:26.907133 systemd[1]: Started session-2.scope - Session 2 of User core. 
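The kubelet failure above, and the identical failures after each "Scheduled restart job" later in the log, are the normal state of a freshly provisioned node: the unit is started before anything has written /var/lib/kubelet/config.yaml, exits with status 1, and systemd keeps rescheduling it until bootstrap tooling (typically kubeadm init or join) creates that file. A trivial sketch of the precondition it keeps tripping over:

```python
# Mirrors the kubelet error above: startup cannot proceed until some
# bootstrap step writes this config file.
from pathlib import Path

config = Path("/var/lib/kubelet/config.yaml")
print("config present" if config.exists()
      else "config missing: kubelet exits with status 1 and is retried")
```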
Jan 17 00:19:27.418054 sshd[1635]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:27.425151 systemd[1]: sshd@1-157.180.82.149:22-20.161.92.111:47492.service: Deactivated successfully. Jan 17 00:19:27.428808 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:19:27.429808 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:19:27.431693 systemd-logind[1487]: Removed session 2. Jan 17 00:19:27.562676 systemd[1]: Started sshd@2-157.180.82.149:22-20.161.92.111:47496.service - OpenSSH per-connection server daemon (20.161.92.111:47496). Jan 17 00:19:28.343383 sshd[1642]: Accepted publickey for core from 20.161.92.111 port 47496 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:19:28.346490 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:28.354400 systemd-logind[1487]: New session 3 of user core. Jan 17 00:19:28.365137 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:19:28.873627 sshd[1642]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:28.880787 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:19:28.882462 systemd[1]: sshd@2-157.180.82.149:22-20.161.92.111:47496.service: Deactivated successfully. Jan 17 00:19:28.886313 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:19:28.888144 systemd-logind[1487]: Removed session 3. Jan 17 00:19:29.010337 systemd[1]: Started sshd@3-157.180.82.149:22-20.161.92.111:47504.service - OpenSSH per-connection server daemon (20.161.92.111:47504). Jan 17 00:19:29.780035 sshd[1649]: Accepted publickey for core from 20.161.92.111 port 47504 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:19:29.783159 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:29.791218 systemd-logind[1487]: New session 4 of user core. Jan 17 00:19:29.801117 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:19:30.315846 sshd[1649]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:30.322141 systemd[1]: sshd@3-157.180.82.149:22-20.161.92.111:47504.service: Deactivated successfully. Jan 17 00:19:30.326196 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:19:30.327419 systemd-logind[1487]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:19:30.329203 systemd-logind[1487]: Removed session 4. Jan 17 00:19:30.457271 systemd[1]: Started sshd@4-157.180.82.149:22-20.161.92.111:47510.service - OpenSSH per-connection server daemon (20.161.92.111:47510). Jan 17 00:19:31.229540 sshd[1656]: Accepted publickey for core from 20.161.92.111 port 47510 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:19:31.232283 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:31.240917 systemd-logind[1487]: New session 5 of user core. Jan 17 00:19:31.251099 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 00:19:31.655814 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:19:31.656553 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:31.677832 sudo[1659]: pam_unix(sudo:session): session closed for user root Jan 17 00:19:31.801097 sshd[1656]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:31.806893 systemd[1]: sshd@4-157.180.82.149:22-20.161.92.111:47510.service: Deactivated successfully. Jan 17 00:19:31.811144 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:19:31.813732 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:19:31.815743 systemd-logind[1487]: Removed session 5. Jan 17 00:19:31.937284 systemd[1]: Started sshd@5-157.180.82.149:22-20.161.92.111:59902.service - OpenSSH per-connection server daemon (20.161.92.111:59902). Jan 17 00:19:32.709230 sshd[1664]: Accepted publickey for core from 20.161.92.111 port 59902 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:19:32.713282 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:32.722955 systemd-logind[1487]: New session 6 of user core. Jan 17 00:19:32.734181 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:19:33.125060 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:19:33.125761 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:33.133282 sudo[1668]: pam_unix(sudo:session): session closed for user root Jan 17 00:19:33.146357 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:19:33.147356 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:33.173396 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:19:33.177140 auditctl[1671]: No rules Jan 17 00:19:33.178049 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:19:33.178457 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:19:33.188535 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:19:33.247964 augenrules[1689]: No rules Jan 17 00:19:33.250967 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:19:33.253196 sudo[1667]: pam_unix(sudo:session): session closed for user root Jan 17 00:19:33.376435 sshd[1664]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:33.384044 systemd[1]: sshd@5-157.180.82.149:22-20.161.92.111:59902.service: Deactivated successfully. Jan 17 00:19:33.387643 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:19:33.389429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:19:33.390577 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:19:33.397500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:33.398973 systemd-logind[1487]: Removed session 6. Jan 17 00:19:33.506999 systemd[1]: Started sshd@6-157.180.82.149:22-20.161.92.111:59906.service - OpenSSH per-connection server daemon (20.161.92.111:59906). Jan 17 00:19:33.575196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:19:33.578998 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:19:33.609743 kubelet[1707]: E0117 00:19:33.609706 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:19:33.619461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:19:33.619623 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:19:34.287411 sshd[1700]: Accepted publickey for core from 20.161.92.111 port 59906 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:19:34.290181 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:34.298491 systemd-logind[1487]: New session 7 of user core. Jan 17 00:19:34.306082 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:19:34.703202 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:19:34.703925 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:35.170288 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:19:35.174659 (dockerd)[1732]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:19:35.632190 dockerd[1732]: time="2026-01-17T00:19:35.631971123Z" level=info msg="Starting up" Jan 17 00:19:35.802688 dockerd[1732]: time="2026-01-17T00:19:35.802062896Z" level=info msg="Loading containers: start." Jan 17 00:19:36.011918 kernel: Initializing XFRM netlink socket Jan 17 00:19:36.059219 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Jan 17 00:19:36.830862 systemd-resolved[1409]: Clock change detected. Flushing caches. Jan 17 00:19:36.831324 systemd-timesyncd[1417]: Contacted time server 139.162.156.95:123 (2.flatcar.pool.ntp.org). Jan 17 00:19:36.831428 systemd-timesyncd[1417]: Initial clock synchronization to Sat 2026-01-17 00:19:36.830795 UTC. Jan 17 00:19:36.905771 systemd-networkd[1408]: docker0: Link UP Jan 17 00:19:36.936766 dockerd[1732]: time="2026-01-17T00:19:36.936670121Z" level=info msg="Loading containers: done." Jan 17 00:19:36.966043 dockerd[1732]: time="2026-01-17T00:19:36.965895628Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:19:36.966476 dockerd[1732]: time="2026-01-17T00:19:36.966131858Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:19:36.966476 dockerd[1732]: time="2026-01-17T00:19:36.966334278Z" level=info msg="Daemon has completed initialization" Jan 17 00:19:37.018906 dockerd[1732]: time="2026-01-17T00:19:37.018740394Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:19:37.019347 systemd[1]: Started docker.service - Docker Application Container Engine. 
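"API listen on /run/docker.sock" means the daemon now serves the Docker Engine HTTP API over a Unix socket (note the earlier docker.socket warning about the legacy /var/run path being rewritten to /run). A minimal health check against the Engine API's documented /_ping endpoint, assuming local access to the socket (root or docker group membership):

```python
# Pings the socket the daemon exposed above; the Engine API's /_ping
# endpoint answers "OK" when the daemon is healthy.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that connects to a Unix socket."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/_ping")
print(conn.getresponse().read())  # b'OK'
```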
Jan 17 00:19:38.683322 containerd[1508]: time="2026-01-17T00:19:38.683241244Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 17 00:19:39.306923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748082540.mount: Deactivated successfully. Jan 17 00:19:40.381744 containerd[1508]: time="2026-01-17T00:19:40.381684127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:40.382892 containerd[1508]: time="2026-01-17T00:19:40.382738548Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114812" Jan 17 00:19:40.384626 containerd[1508]: time="2026-01-17T00:19:40.383799610Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:40.386005 containerd[1508]: time="2026-01-17T00:19:40.385974283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:40.387233 containerd[1508]: time="2026-01-17T00:19:40.386760174Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.703467369s" Jan 17 00:19:40.387233 containerd[1508]: time="2026-01-17T00:19:40.386797194Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 17 00:19:40.387288 containerd[1508]: time="2026-01-17T00:19:40.387233094Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 17 00:19:41.814851 containerd[1508]: time="2026-01-17T00:19:41.814777608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:41.816564 containerd[1508]: time="2026-01-17T00:19:41.816229510Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016803" Jan 17 00:19:41.819643 containerd[1508]: time="2026-01-17T00:19:41.817829182Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:41.821551 containerd[1508]: time="2026-01-17T00:19:41.821508047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:41.823091 containerd[1508]: time="2026-01-17T00:19:41.823045659Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.435790435s" Jan 17 
00:19:41.823217 containerd[1508]: time="2026-01-17T00:19:41.823196569Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 17 00:19:41.823832 containerd[1508]: time="2026-01-17T00:19:41.823764500Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 17 00:19:43.097958 containerd[1508]: time="2026-01-17T00:19:43.097886542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:43.099691 containerd[1508]: time="2026-01-17T00:19:43.099341924Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158124" Jan 17 00:19:43.103443 containerd[1508]: time="2026-01-17T00:19:43.102588188Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:43.104989 containerd[1508]: time="2026-01-17T00:19:43.104939801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:43.106865 containerd[1508]: time="2026-01-17T00:19:43.105716352Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.281762872s" Jan 17 00:19:43.106865 containerd[1508]: time="2026-01-17T00:19:43.105742982Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 17 00:19:43.107196 containerd[1508]: time="2026-01-17T00:19:43.107140594Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 17 00:19:44.302002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541072825.mount: Deactivated successfully. Jan 17 00:19:44.399420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:19:44.408836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:44.558718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:44.560943 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:19:44.590524 kubelet[1952]: E0117 00:19:44.590491 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:19:44.594037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:19:44.594210 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
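Note: each containerd "Pulled image" entry above carries the image size in bytes and the wall-clock pull time, so effective throughput can be read straight out of the journal. A rough sketch; the regex and the MB/s figure are illustrative, not a containerd API:

import re

# one of the messages above, shortened and with journal escaping removed
msg = ('Pulled image "registry.k8s.io/kube-apiserver:v1.33.7" '
       'size "30111311" in 1.703467369s')

pat = re.compile(r'Pulled image "([^"]+)".*size "(\d+)" in ([\d.]+)(ms|s)')
m = pat.search(msg)
if m:
    image, size = m.group(1), int(m.group(2))
    value, unit = float(m.group(3)), m.group(4)
    seconds = value / 1000 if unit == "ms" else value
    print(f"{image}: {size / seconds / 1e6:.1f} MB/s")  # ~17.7 MB/s here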
Jan 17 00:19:44.729555 containerd[1508]: time="2026-01-17T00:19:44.729509322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:44.730790 containerd[1508]: time="2026-01-17T00:19:44.730687283Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930124" Jan 17 00:19:44.732511 containerd[1508]: time="2026-01-17T00:19:44.731713204Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:44.734044 containerd[1508]: time="2026-01-17T00:19:44.733529537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:44.734044 containerd[1508]: time="2026-01-17T00:19:44.733946917Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.626755873s" Jan 17 00:19:44.734044 containerd[1508]: time="2026-01-17T00:19:44.733967897Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 17 00:19:44.734576 containerd[1508]: time="2026-01-17T00:19:44.734563498Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 17 00:19:45.237987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619493794.mount: Deactivated successfully. 
Jan 17 00:19:46.351806 containerd[1508]: time="2026-01-17T00:19:46.351722919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:46.353542 containerd[1508]: time="2026-01-17T00:19:46.353153601Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Jan 17 00:19:46.355211 containerd[1508]: time="2026-01-17T00:19:46.354654493Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:46.359845 containerd[1508]: time="2026-01-17T00:19:46.359794259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:46.362029 containerd[1508]: time="2026-01-17T00:19:46.361952702Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.627359944s" Jan 17 00:19:46.362102 containerd[1508]: time="2026-01-17T00:19:46.362034812Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 17 00:19:46.362890 containerd[1508]: time="2026-01-17T00:19:46.362817193Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:19:46.854542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361408306.mount: Deactivated successfully. 
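Note: every pull above resolves to both a repo tag and a repo digest; the digest form is what pins the image immutably. A sketch that splits the two reference shapes seen in this log (illustrative only; the full reference grammar has more cases):

def parse_ref(ref: str) -> dict:
    # digest form: repo@sha256:<64 hex>; tag form: repo:tag
    if "@" in ref:
        repo, pin = ref.split("@", 1)
        kind = "digest"
    else:
        repo, _, pin = ref.rpartition(":")
        kind = "tag"
    registry, _, path = repo.partition("/")
    return {"registry": registry, "repository": path, kind: pin}

print(parse_ref("registry.k8s.io/coredns/coredns:v1.12.0"))
print(parse_ref("registry.k8s.io/coredns/coredns@sha256:"
                "40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97"))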
Jan 17 00:19:46.863626 containerd[1508]: time="2026-01-17T00:19:46.863487579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:46.864910 containerd[1508]: time="2026-01-17T00:19:46.864707410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Jan 17 00:19:46.867635 containerd[1508]: time="2026-01-17T00:19:46.865928032Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:46.870095 containerd[1508]: time="2026-01-17T00:19:46.870048147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:46.871479 containerd[1508]: time="2026-01-17T00:19:46.871418969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.548906ms" Jan 17 00:19:46.871479 containerd[1508]: time="2026-01-17T00:19:46.871475569Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:19:46.873325 containerd[1508]: time="2026-01-17T00:19:46.873264801Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 17 00:19:47.418129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251257792.mount: Deactivated successfully. Jan 17 00:19:49.015364 containerd[1508]: time="2026-01-17T00:19:49.015286719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:49.017340 containerd[1508]: time="2026-01-17T00:19:49.016895041Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926291" Jan 17 00:19:49.018633 containerd[1508]: time="2026-01-17T00:19:49.018468333Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:49.024622 containerd[1508]: time="2026-01-17T00:19:49.023393379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:49.028499 containerd[1508]: time="2026-01-17T00:19:49.028461475Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.155142464s" Jan 17 00:19:49.028702 containerd[1508]: time="2026-01-17T00:19:49.028651915Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 17 00:19:53.016782 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:19:53.023917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:53.051756 systemd[1]: Reloading requested from client PID 2099 ('systemctl') (unit session-7.scope)... Jan 17 00:19:53.051850 systemd[1]: Reloading... Jan 17 00:19:53.152849 zram_generator::config[2139]: No configuration found. Jan 17 00:19:53.243033 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:19:53.303917 systemd[1]: Reloading finished in 251 ms. Jan 17 00:19:53.350928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:53.354350 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:19:53.362698 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:53.364875 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:19:53.365364 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:53.373947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:53.487931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:53.491348 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:19:53.542322 kubelet[2200]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:19:53.542675 kubelet[2200]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:19:53.542717 kubelet[2200]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
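Note: the three deprecation warnings above all point the same way: those flags move into the file named by --config. A hedged sketch of the equivalent KubeletConfiguration fields; field names are from the v1beta1 kubelet config schema, the values are placeholders rather than anything read from this host, and --pod-infra-container-image has no config-file equivalent because the sandbox image moves to the CRI side:

import json

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    # replaces --volume-plugin-dir (path as logged further below)
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}
print(json.dumps(kubelet_config, indent=2))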
Jan 17 00:19:53.543968 kubelet[2200]: I0117 00:19:53.543937 2200 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:19:54.214541 kubelet[2200]: I0117 00:19:54.214503 2200 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:19:54.214541 kubelet[2200]: I0117 00:19:54.214528 2200 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:19:54.214756 kubelet[2200]: I0117 00:19:54.214738 2200 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:19:54.239625 kubelet[2200]: I0117 00:19:54.239360 2200 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:19:54.239625 kubelet[2200]: E0117 00:19:54.239564 2200 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://157.180.82.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.180.82.149:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:19:54.246479 kubelet[2200]: E0117 00:19:54.246444 2200 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:19:54.246479 kubelet[2200]: I0117 00:19:54.246471 2200 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:19:54.250688 kubelet[2200]: I0117 00:19:54.250667 2200 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:19:54.250898 kubelet[2200]: I0117 00:19:54.250877 2200 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:19:54.251000 kubelet[2200]: I0117 00:19:54.250892 2200 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-8c81c3eeb1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:19:54.251076 kubelet[2200]: I0117 00:19:54.251000 2200 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:19:54.251076 kubelet[2200]: I0117 00:19:54.251007 2200 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:19:54.251756 kubelet[2200]: I0117 00:19:54.251738 2200 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:19:54.253578 kubelet[2200]: I0117 00:19:54.253562 2200 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:19:54.253578 kubelet[2200]: I0117 00:19:54.253574 2200 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:19:54.253806 kubelet[2200]: I0117 00:19:54.253604 2200 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:19:54.255657 kubelet[2200]: I0117 00:19:54.255380 2200 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:19:54.257318 kubelet[2200]: E0117 00:19:54.257299 2200 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://157.180.82.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-8c81c3eeb1&limit=500&resourceVersion=0\": dial tcp 157.180.82.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:19:54.259427 kubelet[2200]: E0117 00:19:54.259407 2200 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://157.180.82.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.82.149:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:19:54.259698 kubelet[2200]: I0117 00:19:54.259673 2200 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:19:54.260538 kubelet[2200]: I0117 00:19:54.259981 2200 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:19:54.261513 kubelet[2200]: W0117 00:19:54.261084 2200 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:19:54.263619 kubelet[2200]: I0117 00:19:54.263241 2200 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:19:54.263619 kubelet[2200]: I0117 00:19:54.263283 2200 server.go:1289] "Started kubelet" Jan 17 00:19:54.265735 kubelet[2200]: I0117 00:19:54.265260 2200 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:19:54.266008 kubelet[2200]: I0117 00:19:54.265989 2200 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:19:54.267557 kubelet[2200]: I0117 00:19:54.267181 2200 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:19:54.267557 kubelet[2200]: I0117 00:19:54.267410 2200 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:19:54.269450 kubelet[2200]: E0117 00:19:54.267480 2200 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.82.149:6443/api/v1/namespaces/default/events\": dial tcp 157.180.82.149:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-8c81c3eeb1.188b5cb1128d42c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-8c81c3eeb1,UID:ci-4081-3-6-n-8c81c3eeb1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-8c81c3eeb1,},FirstTimestamp:2026-01-17 00:19:54.263265988 +0000 UTC m=+0.768689642,LastTimestamp:2026-01-17 00:19:54.263265988 +0000 UTC m=+0.768689642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-8c81c3eeb1,}" Jan 17 00:19:54.269450 kubelet[2200]: I0117 00:19:54.269336 2200 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:19:54.270107 kubelet[2200]: I0117 00:19:54.269973 2200 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:19:54.273893 kubelet[2200]: E0117 00:19:54.273865 2200 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8c81c3eeb1\" not found" Jan 17 00:19:54.273933 kubelet[2200]: I0117 00:19:54.273908 2200 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:19:54.274085 kubelet[2200]: I0117 00:19:54.274065 2200 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:19:54.274174 kubelet[2200]: I0117 00:19:54.274156 2200 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:19:54.274623 kubelet[2200]: E0117 00:19:54.274565 2200 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://157.180.82.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.82.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:19:54.275282 kubelet[2200]: I0117 00:19:54.275244 2200 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:19:54.275369 kubelet[2200]: I0117 00:19:54.275347 2200 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:19:54.277109 kubelet[2200]: E0117 00:19:54.277081 2200 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:19:54.277712 kubelet[2200]: I0117 00:19:54.277219 2200 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:19:54.288165 kubelet[2200]: E0117 00:19:54.288129 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.82.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8c81c3eeb1?timeout=10s\": dial tcp 157.180.82.149:6443: connect: connection refused" interval="200ms" Jan 17 00:19:54.291341 kubelet[2200]: I0117 00:19:54.291323 2200 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:19:54.294006 kubelet[2200]: I0117 00:19:54.293995 2200 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:19:54.294065 kubelet[2200]: I0117 00:19:54.294059 2200 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:19:54.294103 kubelet[2200]: I0117 00:19:54.294097 2200 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:19:54.294131 kubelet[2200]: I0117 00:19:54.294126 2200 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:19:54.294199 kubelet[2200]: E0117 00:19:54.294189 2200 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:19:54.302782 kubelet[2200]: E0117 00:19:54.302766 2200 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://157.180.82.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.82.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:19:54.305181 kubelet[2200]: I0117 00:19:54.305158 2200 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:19:54.305234 kubelet[2200]: I0117 00:19:54.305227 2200 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:19:54.305291 kubelet[2200]: I0117 00:19:54.305286 2200 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:19:54.307468 kubelet[2200]: I0117 00:19:54.307459 2200 policy_none.go:49] "None policy: Start" Jan 17 00:19:54.307519 kubelet[2200]: I0117 00:19:54.307512 2200 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:19:54.307560 kubelet[2200]: I0117 00:19:54.307554 2200 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:19:54.312587 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 17 00:19:54.325811 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:19:54.330055 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:19:54.337414 kubelet[2200]: E0117 00:19:54.337401 2200 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:19:54.337664 kubelet[2200]: I0117 00:19:54.337656 2200 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:19:54.338658 kubelet[2200]: I0117 00:19:54.338281 2200 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:19:54.338980 kubelet[2200]: I0117 00:19:54.338952 2200 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:19:54.339878 kubelet[2200]: E0117 00:19:54.339756 2200 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:19:54.339878 kubelet[2200]: E0117 00:19:54.339798 2200 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-8c81c3eeb1\" not found" Jan 17 00:19:54.414147 systemd[1]: Created slice kubepods-burstable-pod916e5e3fac68011d0778b83418892384.slice - libcontainer container kubepods-burstable-pod916e5e3fac68011d0778b83418892384.slice. Jan 17 00:19:54.428151 kubelet[2200]: E0117 00:19:54.428064 2200 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8c81c3eeb1\" not found" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.433091 systemd[1]: Created slice kubepods-burstable-pod5ffd02a36cdd6e44b37f5cff74b11c6c.slice - libcontainer container kubepods-burstable-pod5ffd02a36cdd6e44b37f5cff74b11c6c.slice. Jan 17 00:19:54.437327 kubelet[2200]: E0117 00:19:54.436975 2200 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8c81c3eeb1\" not found" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.441683 kubelet[2200]: I0117 00:19:54.441184 2200 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.441683 kubelet[2200]: E0117 00:19:54.441645 2200 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.82.149:6443/api/v1/nodes\": dial tcp 157.180.82.149:6443: connect: connection refused" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.444325 systemd[1]: Created slice kubepods-burstable-podbae799b8ea820963b5009dea577b6708.slice - libcontainer container kubepods-burstable-podbae799b8ea820963b5009dea577b6708.slice. 
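Note: the slice names above follow the kubelet's cgroup layout directly: one parent slice per QoS class (guaranteed pods live straight under kubepods.slice), then one pod<uid>.slice per pod. A toy reconstruction; real UID escaping also handles dashes, which the UIDs in this log happen not to contain:

def pod_slice(qos: str, uid: str) -> str:
    parent = {
        "guaranteed": "kubepods.slice",
        "burstable":  "kubepods-burstable.slice",
        "besteffort": "kubepods-besteffort.slice",
    }[qos]
    return parent.removesuffix(".slice") + f"-pod{uid}.slice"

print(pod_slice("burstable", "916e5e3fac68011d0778b83418892384"))
# -> kubepods-burstable-pod916e5e3fac68011d0778b83418892384.slice, as logged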
Jan 17 00:19:54.447844 kubelet[2200]: E0117 00:19:54.447793 2200 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8c81c3eeb1\" not found" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.488972 kubelet[2200]: E0117 00:19:54.488835 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.82.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8c81c3eeb1?timeout=10s\": dial tcp 157.180.82.149:6443: connect: connection refused" interval="400ms" Jan 17 00:19:54.575047 kubelet[2200]: I0117 00:19:54.574973 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/916e5e3fac68011d0778b83418892384-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"916e5e3fac68011d0778b83418892384\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.575047 kubelet[2200]: I0117 00:19:54.575029 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/916e5e3fac68011d0778b83418892384-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"916e5e3fac68011d0778b83418892384\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.575785 kubelet[2200]: I0117 00:19:54.575087 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/916e5e3fac68011d0778b83418892384-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"916e5e3fac68011d0778b83418892384\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.575785 kubelet[2200]: I0117 00:19:54.575116 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ffd02a36cdd6e44b37f5cff74b11c6c-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"5ffd02a36cdd6e44b37f5cff74b11c6c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.575785 kubelet[2200]: I0117 00:19:54.575152 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ffd02a36cdd6e44b37f5cff74b11c6c-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"5ffd02a36cdd6e44b37f5cff74b11c6c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.575785 kubelet[2200]: I0117 00:19:54.575175 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ffd02a36cdd6e44b37f5cff74b11c6c-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"5ffd02a36cdd6e44b37f5cff74b11c6c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.575785 kubelet[2200]: I0117 00:19:54.575199 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ffd02a36cdd6e44b37f5cff74b11c6c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"5ffd02a36cdd6e44b37f5cff74b11c6c\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.575989 kubelet[2200]: I0117 00:19:54.575230 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bae799b8ea820963b5009dea577b6708-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"bae799b8ea820963b5009dea577b6708\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.575989 kubelet[2200]: I0117 00:19:54.575270 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ffd02a36cdd6e44b37f5cff74b11c6c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"5ffd02a36cdd6e44b37f5cff74b11c6c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.644301 kubelet[2200]: I0117 00:19:54.644225 2200 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.644700 kubelet[2200]: E0117 00:19:54.644642 2200 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.82.149:6443/api/v1/nodes\": dial tcp 157.180.82.149:6443: connect: connection refused" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:54.729682 containerd[1508]: time="2026-01-17T00:19:54.729581291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-8c81c3eeb1,Uid:916e5e3fac68011d0778b83418892384,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:54.737970 containerd[1508]: time="2026-01-17T00:19:54.737893581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1,Uid:5ffd02a36cdd6e44b37f5cff74b11c6c,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:54.749649 containerd[1508]: time="2026-01-17T00:19:54.748844125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-8c81c3eeb1,Uid:bae799b8ea820963b5009dea577b6708,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:54.890323 kubelet[2200]: E0117 00:19:54.890234 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.82.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8c81c3eeb1?timeout=10s\": dial tcp 157.180.82.149:6443: connect: connection refused" interval="800ms" Jan 17 00:19:55.049196 kubelet[2200]: I0117 00:19:55.049046 2200 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:55.049709 kubelet[2200]: E0117 00:19:55.049661 2200 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.82.149:6443/api/v1/nodes\": dial tcp 157.180.82.149:6443: connect: connection refused" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:55.218701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount847182191.mount: Deactivated successfully. 
Jan 17 00:19:55.227180 containerd[1508]: time="2026-01-17T00:19:55.227107333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:55.229471 containerd[1508]: time="2026-01-17T00:19:55.229407226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jan 17 00:19:55.229823 containerd[1508]: time="2026-01-17T00:19:55.229526376Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:55.230757 containerd[1508]: time="2026-01-17T00:19:55.230705467Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:55.232755 containerd[1508]: time="2026-01-17T00:19:55.232589770Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:19:55.232755 containerd[1508]: time="2026-01-17T00:19:55.232645000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:19:55.232755 containerd[1508]: time="2026-01-17T00:19:55.232723320Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:55.236408 containerd[1508]: time="2026-01-17T00:19:55.236384174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:55.238659 containerd[1508]: time="2026-01-17T00:19:55.237961316Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 489.042611ms" Jan 17 00:19:55.240123 containerd[1508]: time="2026-01-17T00:19:55.240064579Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.092498ms" Jan 17 00:19:55.243193 containerd[1508]: time="2026-01-17T00:19:55.243138593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.284012ms" Jan 17 00:19:55.269377 kubelet[2200]: E0117 00:19:55.267201 2200 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://157.180.82.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.82.149:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:19:55.403492 containerd[1508]: time="2026-01-17T00:19:55.402736852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:55.403492 containerd[1508]: time="2026-01-17T00:19:55.402852153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:55.403492 containerd[1508]: time="2026-01-17T00:19:55.402871413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:55.403492 containerd[1508]: time="2026-01-17T00:19:55.403057653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:55.409860 containerd[1508]: time="2026-01-17T00:19:55.409065250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:55.409860 containerd[1508]: time="2026-01-17T00:19:55.409126170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:55.409860 containerd[1508]: time="2026-01-17T00:19:55.409142500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:55.409860 containerd[1508]: time="2026-01-17T00:19:55.409300751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:55.432198 containerd[1508]: time="2026-01-17T00:19:55.431679909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:55.432198 containerd[1508]: time="2026-01-17T00:19:55.431753189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:55.432198 containerd[1508]: time="2026-01-17T00:19:55.431768979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:55.432198 containerd[1508]: time="2026-01-17T00:19:55.431882899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:55.459878 systemd[1]: Started cri-containerd-98b77b9d488e15c1a9e69865beda388c6d70f7fd71d3f2bccc9e702acc217654.scope - libcontainer container 98b77b9d488e15c1a9e69865beda388c6d70f7fd71d3f2bccc9e702acc217654. Jan 17 00:19:55.462960 systemd[1]: Started cri-containerd-cc507d81c4ea764204c9bf13c1812c77d724b33ba309d2d049624dd5994af12f.scope - libcontainer container cc507d81c4ea764204c9bf13c1812c77d724b33ba309d2d049624dd5994af12f. Jan 17 00:19:55.472906 systemd[1]: Started cri-containerd-4e80c512233b3870e1fa1cf35c2067e8e966b884bf5dcae720bffb65d3975cb3.scope - libcontainer container 4e80c512233b3870e1fa1cf35c2067e8e966b884bf5dcae720bffb65d3975cb3. 
Jan 17 00:19:55.510919 kubelet[2200]: E0117 00:19:55.510772 2200 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://157.180.82.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.82.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:19:55.511837 containerd[1508]: time="2026-01-17T00:19:55.511807899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-8c81c3eeb1,Uid:916e5e3fac68011d0778b83418892384,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc507d81c4ea764204c9bf13c1812c77d724b33ba309d2d049624dd5994af12f\"" Jan 17 00:19:55.517419 containerd[1508]: time="2026-01-17T00:19:55.517378346Z" level=info msg="CreateContainer within sandbox \"cc507d81c4ea764204c9bf13c1812c77d724b33ba309d2d049624dd5994af12f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:19:55.551424 containerd[1508]: time="2026-01-17T00:19:55.551369638Z" level=info msg="CreateContainer within sandbox \"cc507d81c4ea764204c9bf13c1812c77d724b33ba309d2d049624dd5994af12f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3b8c00985d05db7148103309d3a93d16495d178c0c8597ad28489f4d6c027983\"" Jan 17 00:19:55.556185 containerd[1508]: time="2026-01-17T00:19:55.556095274Z" level=info msg="StartContainer for \"3b8c00985d05db7148103309d3a93d16495d178c0c8597ad28489f4d6c027983\"" Jan 17 00:19:55.560120 containerd[1508]: time="2026-01-17T00:19:55.559804339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-8c81c3eeb1,Uid:bae799b8ea820963b5009dea577b6708,Namespace:kube-system,Attempt:0,} returns sandbox id \"98b77b9d488e15c1a9e69865beda388c6d70f7fd71d3f2bccc9e702acc217654\"" Jan 17 00:19:55.563043 containerd[1508]: time="2026-01-17T00:19:55.563027493Z" level=info msg="CreateContainer within sandbox \"98b77b9d488e15c1a9e69865beda388c6d70f7fd71d3f2bccc9e702acc217654\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:19:55.568824 containerd[1508]: time="2026-01-17T00:19:55.568789770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1,Uid:5ffd02a36cdd6e44b37f5cff74b11c6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e80c512233b3870e1fa1cf35c2067e8e966b884bf5dcae720bffb65d3975cb3\"" Jan 17 00:19:55.575124 containerd[1508]: time="2026-01-17T00:19:55.575088158Z" level=info msg="CreateContainer within sandbox \"4e80c512233b3870e1fa1cf35c2067e8e966b884bf5dcae720bffb65d3975cb3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:19:55.584624 containerd[1508]: time="2026-01-17T00:19:55.584377219Z" level=info msg="CreateContainer within sandbox \"98b77b9d488e15c1a9e69865beda388c6d70f7fd71d3f2bccc9e702acc217654\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e73337eac2ae21f90956855c8c0df6c8884c3d26f3d081c33c3aa47bcf9697b\"" Jan 17 00:19:55.584793 containerd[1508]: time="2026-01-17T00:19:55.584762120Z" level=info msg="StartContainer for \"9e73337eac2ae21f90956855c8c0df6c8884c3d26f3d081c33c3aa47bcf9697b\"" Jan 17 00:19:55.597919 containerd[1508]: time="2026-01-17T00:19:55.597854356Z" level=info msg="CreateContainer within sandbox \"4e80c512233b3870e1fa1cf35c2067e8e966b884bf5dcae720bffb65d3975cb3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"700c15ad25ac40766f78d5dc73a671eb86a47080ea1ab2fde2bce2700e5ad1b9\"" Jan 17 00:19:55.598787 containerd[1508]: time="2026-01-17T00:19:55.598728637Z" level=info msg="StartContainer for \"700c15ad25ac40766f78d5dc73a671eb86a47080ea1ab2fde2bce2700e5ad1b9\"" Jan 17 00:19:55.614759 systemd[1]: Started cri-containerd-3b8c00985d05db7148103309d3a93d16495d178c0c8597ad28489f4d6c027983.scope - libcontainer container 3b8c00985d05db7148103309d3a93d16495d178c0c8597ad28489f4d6c027983. Jan 17 00:19:55.631980 systemd[1]: Started cri-containerd-700c15ad25ac40766f78d5dc73a671eb86a47080ea1ab2fde2bce2700e5ad1b9.scope - libcontainer container 700c15ad25ac40766f78d5dc73a671eb86a47080ea1ab2fde2bce2700e5ad1b9. Jan 17 00:19:55.635488 systemd[1]: Started cri-containerd-9e73337eac2ae21f90956855c8c0df6c8884c3d26f3d081c33c3aa47bcf9697b.scope - libcontainer container 9e73337eac2ae21f90956855c8c0df6c8884c3d26f3d081c33c3aa47bcf9697b. Jan 17 00:19:55.679683 containerd[1508]: time="2026-01-17T00:19:55.679442978Z" level=info msg="StartContainer for \"9e73337eac2ae21f90956855c8c0df6c8884c3d26f3d081c33c3aa47bcf9697b\" returns successfully" Jan 17 00:19:55.681691 containerd[1508]: time="2026-01-17T00:19:55.681575301Z" level=info msg="StartContainer for \"3b8c00985d05db7148103309d3a93d16495d178c0c8597ad28489f4d6c027983\" returns successfully" Jan 17 00:19:55.691009 kubelet[2200]: E0117 00:19:55.690754 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.82.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8c81c3eeb1?timeout=10s\": dial tcp 157.180.82.149:6443: connect: connection refused" interval="1.6s" Jan 17 00:19:55.696786 containerd[1508]: time="2026-01-17T00:19:55.696765420Z" level=info msg="StartContainer for \"700c15ad25ac40766f78d5dc73a671eb86a47080ea1ab2fde2bce2700e5ad1b9\" returns successfully" Jan 17 00:19:55.851311 kubelet[2200]: I0117 00:19:55.851278 2200 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:56.314806 kubelet[2200]: E0117 00:19:56.314676 2200 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8c81c3eeb1\" not found" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:56.315088 kubelet[2200]: E0117 00:19:56.315069 2200 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8c81c3eeb1\" not found" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:56.316404 kubelet[2200]: E0117 00:19:56.316386 2200 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8c81c3eeb1\" not found" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:57.007112 kubelet[2200]: I0117 00:19:57.007027 2200 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:57.081293 kubelet[2200]: I0117 00:19:57.080951 2200 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:57.089556 kubelet[2200]: E0117 00:19:57.089337 2200 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-8c81c3eeb1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:57.089556 kubelet[2200]: I0117 00:19:57.089368 2200 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:57.091734 kubelet[2200]: E0117 00:19:57.090970 2200 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:57.091734 kubelet[2200]: I0117 00:19:57.090997 2200 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:57.094213 kubelet[2200]: E0117 00:19:57.094166 2200 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-8c81c3eeb1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:57.261811 kubelet[2200]: I0117 00:19:57.260809 2200 apiserver.go:52] "Watching apiserver" Jan 17 00:19:57.274923 kubelet[2200]: I0117 00:19:57.274880 2200 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:19:57.315998 kubelet[2200]: I0117 00:19:57.315946 2200 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:57.317110 kubelet[2200]: I0117 00:19:57.316744 2200 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:57.318375 kubelet[2200]: E0117 00:19:57.318326 2200 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-8c81c3eeb1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:57.320220 kubelet[2200]: E0117 00:19:57.320165 2200 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-8c81c3eeb1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:58.606959 kubelet[2200]: I0117 00:19:58.606855 2200 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:58.970973 systemd[1]: Reloading requested from client PID 2483 ('systemctl') (unit session-7.scope)... Jan 17 00:19:58.970999 systemd[1]: Reloading... Jan 17 00:19:59.119647 zram_generator::config[2526]: No configuration found. Jan 17 00:19:59.234183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:19:59.305326 systemd[1]: Reloading finished in 333 ms. Jan 17 00:19:59.354482 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:59.369403 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:19:59.369820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:59.369904 systemd[1]: kubelet.service: Consumed 1.176s CPU time, 134.2M memory peak, 0B memory swap peak. Jan 17 00:19:59.377132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:59.509396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:19:59.513817 (kubelet)[2574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:19:59.576736 kubelet[2574]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:19:59.576736 kubelet[2574]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:19:59.576736 kubelet[2574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:19:59.577068 kubelet[2574]: I0117 00:19:59.576835 2574 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:19:59.587653 kubelet[2574]: I0117 00:19:59.587525 2574 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:19:59.587653 kubelet[2574]: I0117 00:19:59.587543 2574 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:19:59.587854 kubelet[2574]: I0117 00:19:59.587683 2574 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:19:59.588514 kubelet[2574]: I0117 00:19:59.588445 2574 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:19:59.590217 kubelet[2574]: I0117 00:19:59.589848 2574 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:19:59.593990 kubelet[2574]: E0117 00:19:59.593959 2574 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:19:59.594160 kubelet[2574]: I0117 00:19:59.594142 2574 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:19:59.605195 kubelet[2574]: I0117 00:19:59.605147 2574 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:19:59.605758 kubelet[2574]: I0117 00:19:59.605725 2574 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:19:59.606849 kubelet[2574]: I0117 00:19:59.605860 2574 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-8c81c3eeb1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:19:59.606849 kubelet[2574]: I0117 00:19:59.606396 2574 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:19:59.606849 kubelet[2574]: I0117 00:19:59.606411 2574 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:19:59.606849 kubelet[2574]: I0117 00:19:59.606495 2574 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:19:59.607143 kubelet[2574]: I0117 00:19:59.606880 2574 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:19:59.607143 kubelet[2574]: I0117 00:19:59.606903 2574 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:19:59.607143 kubelet[2574]: I0117 00:19:59.606938 2574 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:19:59.607143 kubelet[2574]: I0117 00:19:59.606954 2574 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:19:59.610648 kubelet[2574]: I0117 00:19:59.609926 2574 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:19:59.610740 kubelet[2574]: I0117 00:19:59.610705 2574 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:19:59.614747 kubelet[2574]: I0117 00:19:59.614717 2574 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:19:59.614822 kubelet[2574]: I0117 00:19:59.614779 2574 server.go:1289] "Started kubelet" Jan 17 00:19:59.618160 kubelet[2574]: I0117 00:19:59.618099 2574 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 
00:19:59.618587 kubelet[2574]: I0117 00:19:59.618567 2574 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:19:59.620156 kubelet[2574]: I0117 00:19:59.619980 2574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:19:59.631075 kubelet[2574]: I0117 00:19:59.631018 2574 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:19:59.635615 kubelet[2574]: I0117 00:19:59.634639 2574 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:19:59.636263 kubelet[2574]: I0117 00:19:59.636230 2574 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:19:59.636473 kubelet[2574]: E0117 00:19:59.636436 2574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8c81c3eeb1\" not found" Jan 17 00:19:59.636473 kubelet[2574]: I0117 00:19:59.636256 2574 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:19:59.639682 kubelet[2574]: I0117 00:19:59.636238 2574 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:19:59.647853 kubelet[2574]: I0117 00:19:59.647828 2574 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:19:59.656469 kubelet[2574]: E0117 00:19:59.656453 2574 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:19:59.658629 kubelet[2574]: I0117 00:19:59.658407 2574 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:19:59.658629 kubelet[2574]: I0117 00:19:59.658418 2574 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:19:59.658629 kubelet[2574]: I0117 00:19:59.658478 2574 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:19:59.661892 kubelet[2574]: I0117 00:19:59.661861 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:19:59.664622 kubelet[2574]: I0117 00:19:59.664457 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:19:59.664622 kubelet[2574]: I0117 00:19:59.664477 2574 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:19:59.664622 kubelet[2574]: I0117 00:19:59.664498 2574 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
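An aside on the factory.go:221 failure above: cAdvisor probes for a CRI-O runtime by issuing an HTTP GET to /info over the Unix socket /var/run/crio/crio.sock, and registration fails when the socket does not exist, which is expected on this containerd-only node. A minimal standalone sketch of that probe follows (not cAdvisor's actual code; the /info path and socket location are taken from the error message above):

    package main

    import (
        "context"
        "fmt"
        "net"
        "net/http"
    )

    func main() {
        // Pin every connection to the CRI-O socket; the URL host is then ignored.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/crio/crio.sock")
                },
            },
        }
        resp, err := client.Get("http://crio/info")
        if err != nil {
            // On this node: "... connect: no such file or directory", as in the log.
            fmt.Println("crio factory registration would fail:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("CRI-O responded:", resp.Status)
    }

The containerd and systemd factories register successfully a few entries later, so the kubelet proceeds with those two cgroup/container stats sources.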
Jan 17 00:19:59.664622 kubelet[2574]: I0117 00:19:59.664508 2574 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:19:59.664622 kubelet[2574]: E0117 00:19:59.664577 2574 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:19:59.704457 kubelet[2574]: I0117 00:19:59.704425 2574 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:19:59.704632 kubelet[2574]: I0117 00:19:59.704585 2574 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:19:59.704653 kubelet[2574]: I0117 00:19:59.704643 2574 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:19:59.704861 kubelet[2574]: I0117 00:19:59.704842 2574 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:19:59.704880 kubelet[2574]: I0117 00:19:59.704860 2574 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:19:59.704898 kubelet[2574]: I0117 00:19:59.704878 2574 policy_none.go:49] "None policy: Start" Jan 17 00:19:59.704898 kubelet[2574]: I0117 00:19:59.704891 2574 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:19:59.704929 kubelet[2574]: I0117 00:19:59.704905 2574 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:19:59.705033 kubelet[2574]: I0117 00:19:59.705017 2574 state_mem.go:75] "Updated machine memory state" Jan 17 00:19:59.711420 kubelet[2574]: E0117 00:19:59.711400 2574 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:19:59.711910 kubelet[2574]: I0117 00:19:59.711641 2574 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:19:59.711910 kubelet[2574]: I0117 00:19:59.711659 2574 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:19:59.713072 kubelet[2574]: E0117 00:19:59.713061 2574 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:19:59.713706 kubelet[2574]: I0117 00:19:59.713659 2574 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:19:59.766771 kubelet[2574]: I0117 00:19:59.765466 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.766771 kubelet[2574]: I0117 00:19:59.765558 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.766771 kubelet[2574]: I0117 00:19:59.765963 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.780358 kubelet[2574]: E0117 00:19:59.780291 2574 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-8c81c3eeb1\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.815901 kubelet[2574]: I0117 00:19:59.815868 2574 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.827769 kubelet[2574]: I0117 00:19:59.827722 2574 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.828133 kubelet[2574]: I0117 00:19:59.827962 2574 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.849770 kubelet[2574]: I0117 00:19:59.849692 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/916e5e3fac68011d0778b83418892384-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"916e5e3fac68011d0778b83418892384\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.849770 kubelet[2574]: I0117 00:19:59.849752 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ffd02a36cdd6e44b37f5cff74b11c6c-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"5ffd02a36cdd6e44b37f5cff74b11c6c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.849931 kubelet[2574]: I0117 00:19:59.849812 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ffd02a36cdd6e44b37f5cff74b11c6c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"5ffd02a36cdd6e44b37f5cff74b11c6c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.849931 kubelet[2574]: I0117 00:19:59.849835 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ffd02a36cdd6e44b37f5cff74b11c6c-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"5ffd02a36cdd6e44b37f5cff74b11c6c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.849931 kubelet[2574]: I0117 00:19:59.849861 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bae799b8ea820963b5009dea577b6708-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"bae799b8ea820963b5009dea577b6708\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-8c81c3eeb1" Jan 17 
00:19:59.849931 kubelet[2574]: I0117 00:19:59.849882 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/916e5e3fac68011d0778b83418892384-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"916e5e3fac68011d0778b83418892384\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.849931 kubelet[2574]: I0117 00:19:59.849921 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/916e5e3fac68011d0778b83418892384-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"916e5e3fac68011d0778b83418892384\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.850147 kubelet[2574]: I0117 00:19:59.849946 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ffd02a36cdd6e44b37f5cff74b11c6c-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"5ffd02a36cdd6e44b37f5cff74b11c6c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:19:59.850147 kubelet[2574]: I0117 00:19:59.849969 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ffd02a36cdd6e44b37f5cff74b11c6c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1\" (UID: \"5ffd02a36cdd6e44b37f5cff74b11c6c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:00.610301 kubelet[2574]: I0117 00:20:00.609576 2574 apiserver.go:52] "Watching apiserver" Jan 17 00:20:00.637575 kubelet[2574]: I0117 00:20:00.637487 2574 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:20:00.686810 kubelet[2574]: I0117 00:20:00.686120 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:00.701094 kubelet[2574]: E0117 00:20:00.701038 2574 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-8c81c3eeb1\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:00.730848 kubelet[2574]: I0117 00:20:00.730780 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8c81c3eeb1" podStartSLOduration=2.730760092 podStartE2EDuration="2.730760092s" podCreationTimestamp="2026-01-17 00:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:00.717105435 +0000 UTC m=+1.194051714" watchObservedRunningTime="2026-01-17 00:20:00.730760092 +0000 UTC m=+1.207706371" Jan 17 00:20:00.743791 kubelet[2574]: I0117 00:20:00.743580 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8c81c3eeb1" podStartSLOduration=1.7435622579999999 podStartE2EDuration="1.743562258s" podCreationTimestamp="2026-01-17 00:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:00.731072032 +0000 UTC m=+1.208018321" watchObservedRunningTime="2026-01-17 
00:20:00.743562258 +0000 UTC m=+1.220508537" Jan 17 00:20:00.758440 kubelet[2574]: I0117 00:20:00.755909 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8c81c3eeb1" podStartSLOduration=1.7558960529999998 podStartE2EDuration="1.755896053s" podCreationTimestamp="2026-01-17 00:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:00.743713198 +0000 UTC m=+1.220659477" watchObservedRunningTime="2026-01-17 00:20:00.755896053 +0000 UTC m=+1.232842342" Jan 17 00:20:04.901827 update_engine[1494]: I20260117 00:20:04.901665 1494 update_attempter.cc:509] Updating boot flags... Jan 17 00:20:05.013235 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2630) Jan 17 00:20:05.078030 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2634) Jan 17 00:20:05.118674 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2634) Jan 17 00:20:06.012559 kubelet[2574]: I0117 00:20:06.012484 2574 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:20:06.014119 containerd[1508]: time="2026-01-17T00:20:06.013912876Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:20:06.015250 kubelet[2574]: I0117 00:20:06.014668 2574 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:20:07.059937 systemd[1]: Created slice kubepods-besteffort-pod6f1d09df_51fa_4216_a137_ee0d496c8320.slice - libcontainer container kubepods-besteffort-pod6f1d09df_51fa_4216_a137_ee0d496c8320.slice. 
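The slice name just created above follows the kubelet's systemd cgroup-driver convention (CgroupDriver "systemd" in the NodeConfig logged earlier): kubepods-<qos>-pod<uid>.slice, with the dashes of the pod UID escaped to underscores for systemd. A small sketch of that mapping, assuming only this escaping rule (the helper name is hypothetical):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceForPod builds the systemd slice name used for a pod's cgroup:
    // kubepods-<qos>-pod<uid>.slice, with "-" in the UID escaped to "_".
    func sliceForPod(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // UID taken from the kube-proxy pod in the log above.
        fmt.Println(sliceForPod("besteffort", "6f1d09df-51fa-4216-a137-ee0d496c8320"))
        // -> kubepods-besteffort-pod6f1d09df_51fa_4216_a137_ee0d496c8320.slice
    }

The output matches the kubepods-besteffort-pod6f1d09df_… slice systemd reports creating for kube-proxy-28456.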
Jan 17 00:20:07.100339 kubelet[2574]: I0117 00:20:07.100202 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6f1d09df-51fa-4216-a137-ee0d496c8320-kube-proxy\") pod \"kube-proxy-28456\" (UID: \"6f1d09df-51fa-4216-a137-ee0d496c8320\") " pod="kube-system/kube-proxy-28456" Jan 17 00:20:07.100339 kubelet[2574]: I0117 00:20:07.100326 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f1d09df-51fa-4216-a137-ee0d496c8320-xtables-lock\") pod \"kube-proxy-28456\" (UID: \"6f1d09df-51fa-4216-a137-ee0d496c8320\") " pod="kube-system/kube-proxy-28456" Jan 17 00:20:07.101137 kubelet[2574]: I0117 00:20:07.100394 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f1d09df-51fa-4216-a137-ee0d496c8320-lib-modules\") pod \"kube-proxy-28456\" (UID: \"6f1d09df-51fa-4216-a137-ee0d496c8320\") " pod="kube-system/kube-proxy-28456" Jan 17 00:20:07.101137 kubelet[2574]: I0117 00:20:07.100435 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hggkk\" (UniqueName: \"kubernetes.io/projected/6f1d09df-51fa-4216-a137-ee0d496c8320-kube-api-access-hggkk\") pod \"kube-proxy-28456\" (UID: \"6f1d09df-51fa-4216-a137-ee0d496c8320\") " pod="kube-system/kube-proxy-28456" Jan 17 00:20:07.297939 systemd[1]: Created slice kubepods-besteffort-pod6eb16350_20d6_408d_b797_f946cfc9d100.slice - libcontainer container kubepods-besteffort-pod6eb16350_20d6_408d_b797_f946cfc9d100.slice. Jan 17 00:20:07.371270 containerd[1508]: time="2026-01-17T00:20:07.371088632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-28456,Uid:6f1d09df-51fa-4216-a137-ee0d496c8320,Namespace:kube-system,Attempt:0,}" Jan 17 00:20:07.402436 kubelet[2574]: I0117 00:20:07.402228 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw7vn\" (UniqueName: \"kubernetes.io/projected/6eb16350-20d6-408d-b797-f946cfc9d100-kube-api-access-bw7vn\") pod \"tigera-operator-7dcd859c48-vvj92\" (UID: \"6eb16350-20d6-408d-b797-f946cfc9d100\") " pod="tigera-operator/tigera-operator-7dcd859c48-vvj92" Jan 17 00:20:07.402436 kubelet[2574]: I0117 00:20:07.402278 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6eb16350-20d6-408d-b797-f946cfc9d100-var-lib-calico\") pod \"tigera-operator-7dcd859c48-vvj92\" (UID: \"6eb16350-20d6-408d-b797-f946cfc9d100\") " pod="tigera-operator/tigera-operator-7dcd859c48-vvj92" Jan 17 00:20:07.414978 containerd[1508]: time="2026-01-17T00:20:07.414370106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:07.414978 containerd[1508]: time="2026-01-17T00:20:07.414508126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:07.414978 containerd[1508]: time="2026-01-17T00:20:07.414533586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:07.415234 containerd[1508]: time="2026-01-17T00:20:07.414794977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:07.457875 systemd[1]: Started cri-containerd-fdc961b27233a91a1b38f4deca99d7db210d722cc388727cdf61b283291220e8.scope - libcontainer container fdc961b27233a91a1b38f4deca99d7db210d722cc388727cdf61b283291220e8. Jan 17 00:20:07.509818 containerd[1508]: time="2026-01-17T00:20:07.509724635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-28456,Uid:6f1d09df-51fa-4216-a137-ee0d496c8320,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdc961b27233a91a1b38f4deca99d7db210d722cc388727cdf61b283291220e8\"" Jan 17 00:20:07.519669 containerd[1508]: time="2026-01-17T00:20:07.519007807Z" level=info msg="CreateContainer within sandbox \"fdc961b27233a91a1b38f4deca99d7db210d722cc388727cdf61b283291220e8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:20:07.551573 containerd[1508]: time="2026-01-17T00:20:07.551507677Z" level=info msg="CreateContainer within sandbox \"fdc961b27233a91a1b38f4deca99d7db210d722cc388727cdf61b283291220e8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f73ef4fa5a7dfdb60c9c4feaa7aeb2357f52172239766a67e0e84640400161fc\"" Jan 17 00:20:07.553826 containerd[1508]: time="2026-01-17T00:20:07.552464079Z" level=info msg="StartContainer for \"f73ef4fa5a7dfdb60c9c4feaa7aeb2357f52172239766a67e0e84640400161fc\"" Jan 17 00:20:07.600807 systemd[1]: Started cri-containerd-f73ef4fa5a7dfdb60c9c4feaa7aeb2357f52172239766a67e0e84640400161fc.scope - libcontainer container f73ef4fa5a7dfdb60c9c4feaa7aeb2357f52172239766a67e0e84640400161fc. Jan 17 00:20:07.603065 containerd[1508]: time="2026-01-17T00:20:07.603021182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vvj92,Uid:6eb16350-20d6-408d-b797-f946cfc9d100,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:20:07.661630 containerd[1508]: time="2026-01-17T00:20:07.659170992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:07.661630 containerd[1508]: time="2026-01-17T00:20:07.659259202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:07.661630 containerd[1508]: time="2026-01-17T00:20:07.659287002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:07.661630 containerd[1508]: time="2026-01-17T00:20:07.659528322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:07.666505 containerd[1508]: time="2026-01-17T00:20:07.666441501Z" level=info msg="StartContainer for \"f73ef4fa5a7dfdb60c9c4feaa7aeb2357f52172239766a67e0e84640400161fc\" returns successfully" Jan 17 00:20:07.698015 systemd[1]: Started cri-containerd-d3a734847b84c37af66a59af29314619d22e591aa6895d22b7d0c34054ee597f.scope - libcontainer container d3a734847b84c37af66a59af29314619d22e591aa6895d22b7d0c34054ee597f. 
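The sandbox and container lifecycle above follows the standard CRI call order the kubelet drives against containerd: RunPodSandbox, then CreateContainer inside the returned sandbox, then StartContainer. A compressed sketch of the same sequence over the CRI v1 gRPC API; the image tag is a placeholder and error handling is reduced to log.Fatal:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial containerd's CRI endpoint the way the kubelet (and crictl) do.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Sandbox metadata mirrors the kube-proxy-28456 entry in the log.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-28456",
                Uid:       "6f1d09df-51fa-4216-a137-ee0d496c8320",
                Namespace: "kube-system",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                // Hypothetical image reference; the log does not name the tag.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }

The sandbox id and container id returned by these calls are the long hex names (fdc961b2…, f73ef4fa…) that systemd then wraps in cri-containerd-<id>.scope units, as seen above.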
Jan 17 00:20:07.726873 kubelet[2574]: I0117 00:20:07.726802 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-28456" podStartSLOduration=0.726785497 podStartE2EDuration="726.785497ms" podCreationTimestamp="2026-01-17 00:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:07.726266406 +0000 UTC m=+8.203212675" watchObservedRunningTime="2026-01-17 00:20:07.726785497 +0000 UTC m=+8.203731766" Jan 17 00:20:07.756566 containerd[1508]: time="2026-01-17T00:20:07.756396124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vvj92,Uid:6eb16350-20d6-408d-b797-f946cfc9d100,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d3a734847b84c37af66a59af29314619d22e591aa6895d22b7d0c34054ee597f\"" Jan 17 00:20:07.759214 containerd[1508]: time="2026-01-17T00:20:07.759184427Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:20:09.821863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655844264.mount: Deactivated successfully. Jan 17 00:20:12.251994 containerd[1508]: time="2026-01-17T00:20:12.251928965Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:12.253095 containerd[1508]: time="2026-01-17T00:20:12.253040438Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:20:12.254079 containerd[1508]: time="2026-01-17T00:20:12.254047662Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:12.256889 containerd[1508]: time="2026-01-17T00:20:12.256860547Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:12.258058 containerd[1508]: time="2026-01-17T00:20:12.257747101Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.498524084s" Jan 17 00:20:12.258058 containerd[1508]: time="2026-01-17T00:20:12.257783181Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:20:12.263626 containerd[1508]: time="2026-01-17T00:20:12.262268966Z" level=info msg="CreateContainer within sandbox \"d3a734847b84c37af66a59af29314619d22e591aa6895d22b7d0c34054ee597f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:20:12.278208 containerd[1508]: time="2026-01-17T00:20:12.278152926Z" level=info msg="CreateContainer within sandbox \"d3a734847b84c37af66a59af29314619d22e591aa6895d22b7d0c34054ee597f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4c0579a3433570adcce39125a28ee233de9b2053111147253478183081aeebbc\"" Jan 17 00:20:12.279944 containerd[1508]: time="2026-01-17T00:20:12.278762262Z" level=info msg="StartContainer for \"4c0579a3433570adcce39125a28ee233de9b2053111147253478183081aeebbc\"" Jan 
17 00:20:12.314839 systemd[1]: Started cri-containerd-4c0579a3433570adcce39125a28ee233de9b2053111147253478183081aeebbc.scope - libcontainer container 4c0579a3433570adcce39125a28ee233de9b2053111147253478183081aeebbc. Jan 17 00:20:12.344459 containerd[1508]: time="2026-01-17T00:20:12.344382042Z" level=info msg="StartContainer for \"4c0579a3433570adcce39125a28ee233de9b2053111147253478183081aeebbc\" returns successfully" Jan 17 00:20:17.671645 sudo[1715]: pam_unix(sudo:session): session closed for user root Jan 17 00:20:17.795603 sshd[1700]: pam_unix(sshd:session): session closed for user core Jan 17 00:20:17.797989 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:20:17.800861 systemd[1]: sshd@6-157.180.82.149:22-20.161.92.111:59906.service: Deactivated successfully. Jan 17 00:20:17.802369 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:20:17.804647 systemd[1]: session-7.scope: Consumed 6.571s CPU time, 156.6M memory peak, 0B memory swap peak. Jan 17 00:20:17.805837 systemd-logind[1487]: Removed session 7. Jan 17 00:20:21.899260 kubelet[2574]: I0117 00:20:21.899180 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-vvj92" podStartSLOduration=10.398550197 podStartE2EDuration="14.89916041s" podCreationTimestamp="2026-01-17 00:20:07 +0000 UTC" firstStartedPulling="2026-01-17 00:20:07.757819485 +0000 UTC m=+8.234765754" lastFinishedPulling="2026-01-17 00:20:12.258429718 +0000 UTC m=+12.735375967" observedRunningTime="2026-01-17 00:20:12.743574286 +0000 UTC m=+13.220520625" watchObservedRunningTime="2026-01-17 00:20:21.89916041 +0000 UTC m=+22.376106689" Jan 17 00:20:21.919816 systemd[1]: Created slice kubepods-besteffort-pod7b28f20b_77ac_412c_8a48_f0a47b89b97e.slice - libcontainer container kubepods-besteffort-pod7b28f20b_77ac_412c_8a48_f0a47b89b97e.slice. Jan 17 00:20:21.992987 kubelet[2574]: I0117 00:20:21.992684 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b28f20b-77ac-412c-8a48-f0a47b89b97e-tigera-ca-bundle\") pod \"calico-typha-7849bb96f8-v9kjf\" (UID: \"7b28f20b-77ac-412c-8a48-f0a47b89b97e\") " pod="calico-system/calico-typha-7849bb96f8-v9kjf" Jan 17 00:20:21.992987 kubelet[2574]: I0117 00:20:21.992870 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7b28f20b-77ac-412c-8a48-f0a47b89b97e-typha-certs\") pod \"calico-typha-7849bb96f8-v9kjf\" (UID: \"7b28f20b-77ac-412c-8a48-f0a47b89b97e\") " pod="calico-system/calico-typha-7849bb96f8-v9kjf" Jan 17 00:20:21.992987 kubelet[2574]: I0117 00:20:21.992898 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9l2f\" (UniqueName: \"kubernetes.io/projected/7b28f20b-77ac-412c-8a48-f0a47b89b97e-kube-api-access-d9l2f\") pod \"calico-typha-7849bb96f8-v9kjf\" (UID: \"7b28f20b-77ac-412c-8a48-f0a47b89b97e\") " pod="calico-system/calico-typha-7849bb96f8-v9kjf" Jan 17 00:20:22.096958 systemd[1]: Created slice kubepods-besteffort-pod6e5eb92f_c83d_4863_8113_c2752f67db51.slice - libcontainer container kubepods-besteffort-pod6e5eb92f_c83d_4863_8113_c2752f67db51.slice. 
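The startup-latency entries above fit the pattern podStartSLOduration = podStartE2EDuration minus time spent pulling images: kube-proxy pulled nothing (zeroed pull timestamps), so its two durations match, while tigera-operator's ~4.5 s pull accounts for the gap between 14.899 s and ~10.399 s. A worked check with the timestamps copied from the log (a sketch, not the tracker's code in pod_startup_latency_tracker.go):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            // Layout matching Go's default Time.String() output used in the log.
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // Timestamps from the tigera-operator-7dcd859c48-vvj92 entry above.
        firstStartedPulling := parse("2026-01-17 00:20:07.757819485 +0000 UTC")
        lastFinishedPulling := parse("2026-01-17 00:20:12.258429718 +0000 UTC")
        e2e := 14899160410 * time.Nanosecond // podStartE2EDuration="14.89916041s"

        pull := lastFinishedPulling.Sub(firstStartedPulling)
        fmt.Println("image pull took:", pull)  // ~4.500610233s
        fmt.Println("SLO duration:", e2e-pull) // ~10.39855s, matching podStartSLOduration to within rounding
    }

The ~4.5 s pull duration also agrees with containerd's own "Pulled image … in 4.498524084s" entry for quay.io/tigera/operator:v1.38.7 above.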
Jan 17 00:20:22.195074 kubelet[2574]: I0117 00:20:22.194877 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e5eb92f-c83d-4863-8113-c2752f67db51-var-lib-calico\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.195074 kubelet[2574]: I0117 00:20:22.194935 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6e5eb92f-c83d-4863-8113-c2752f67db51-cni-net-dir\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.195074 kubelet[2574]: I0117 00:20:22.194962 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e5eb92f-c83d-4863-8113-c2752f67db51-tigera-ca-bundle\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.195074 kubelet[2574]: I0117 00:20:22.195016 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9wlz\" (UniqueName: \"kubernetes.io/projected/6e5eb92f-c83d-4863-8113-c2752f67db51-kube-api-access-r9wlz\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.195074 kubelet[2574]: I0117 00:20:22.195043 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6e5eb92f-c83d-4863-8113-c2752f67db51-var-run-calico\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.195397 kubelet[2574]: I0117 00:20:22.195068 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6e5eb92f-c83d-4863-8113-c2752f67db51-cni-log-dir\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.195397 kubelet[2574]: I0117 00:20:22.195089 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6e5eb92f-c83d-4863-8113-c2752f67db51-flexvol-driver-host\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.195397 kubelet[2574]: I0117 00:20:22.195136 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6e5eb92f-c83d-4863-8113-c2752f67db51-node-certs\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.195397 kubelet[2574]: I0117 00:20:22.195159 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e5eb92f-c83d-4863-8113-c2752f67db51-lib-modules\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.195397 kubelet[2574]: I0117 00:20:22.195180 2574 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e5eb92f-c83d-4863-8113-c2752f67db51-xtables-lock\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.195732 kubelet[2574]: I0117 00:20:22.195213 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6e5eb92f-c83d-4863-8113-c2752f67db51-cni-bin-dir\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.195732 kubelet[2574]: I0117 00:20:22.195237 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6e5eb92f-c83d-4863-8113-c2752f67db51-policysync\") pod \"calico-node-zws6q\" (UID: \"6e5eb92f-c83d-4863-8113-c2752f67db51\") " pod="calico-system/calico-node-zws6q" Jan 17 00:20:22.233203 containerd[1508]: time="2026-01-17T00:20:22.232537335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7849bb96f8-v9kjf,Uid:7b28f20b-77ac-412c-8a48-f0a47b89b97e,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:22.325928 containerd[1508]: time="2026-01-17T00:20:22.324087258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:22.332198 kubelet[2574]: E0117 00:20:22.331090 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.332198 kubelet[2574]: W0117 00:20:22.331107 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.332198 kubelet[2574]: E0117 00:20:22.331134 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.339056 containerd[1508]: time="2026-01-17T00:20:22.338551174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:22.339056 containerd[1508]: time="2026-01-17T00:20:22.338669603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:22.341355 containerd[1508]: time="2026-01-17T00:20:22.339380392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:22.345674 kubelet[2574]: E0117 00:20:22.345293 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:20:22.352914 kubelet[2574]: E0117 00:20:22.352797 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.352914 kubelet[2574]: W0117 00:20:22.352812 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.352914 kubelet[2574]: E0117 00:20:22.352823 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.372706 systemd[1]: Started cri-containerd-e386298006c8b08a914e6e083b492a832ed977cdf1a6fad2b9c15a9152d208a2.scope - libcontainer container e386298006c8b08a914e6e083b492a832ed977cdf1a6fad2b9c15a9152d208a2. Jan 17 00:20:22.382532 kubelet[2574]: E0117 00:20:22.382503 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.382790 kubelet[2574]: W0117 00:20:22.382749 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.382790 kubelet[2574]: E0117 00:20:22.382776 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.383087 kubelet[2574]: E0117 00:20:22.383069 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.383087 kubelet[2574]: W0117 00:20:22.383081 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.383146 kubelet[2574]: E0117 00:20:22.383088 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.383531 kubelet[2574]: E0117 00:20:22.383511 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.383531 kubelet[2574]: W0117 00:20:22.383524 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.383531 kubelet[2574]: E0117 00:20:22.383531 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:22.389374 kubelet[2574]: E0117 00:20:22.389314 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.389374 kubelet[2574]: W0117 00:20:22.389323 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.389374 kubelet[2574]: E0117 00:20:22.389329 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.389820 kubelet[2574]: E0117 00:20:22.389735 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.389820 kubelet[2574]: W0117 00:20:22.389743 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.389820 kubelet[2574]: E0117 00:20:22.389750 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.399223 kubelet[2574]: E0117 00:20:22.399197 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.399223 kubelet[2574]: W0117 00:20:22.399221 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.399304 kubelet[2574]: E0117 00:20:22.399237 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.399304 kubelet[2574]: I0117 00:20:22.399276 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/669c9dd2-93ed-4be5-8b4c-834706d32358-socket-dir\") pod \"csi-node-driver-2d8j7\" (UID: \"669c9dd2-93ed-4be5-8b4c-834706d32358\") " pod="calico-system/csi-node-driver-2d8j7" Jan 17 00:20:22.399542 kubelet[2574]: E0117 00:20:22.399526 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.399542 kubelet[2574]: W0117 00:20:22.399539 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.399578 kubelet[2574]: E0117 00:20:22.399547 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:22.399578 kubelet[2574]: I0117 00:20:22.399565 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/669c9dd2-93ed-4be5-8b4c-834706d32358-varrun\") pod \"csi-node-driver-2d8j7\" (UID: \"669c9dd2-93ed-4be5-8b4c-834706d32358\") " pod="calico-system/csi-node-driver-2d8j7" Jan 17 00:20:22.399821 kubelet[2574]: E0117 00:20:22.399805 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.399821 kubelet[2574]: W0117 00:20:22.399818 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.399946 kubelet[2574]: E0117 00:20:22.399829 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.399946 kubelet[2574]: I0117 00:20:22.399851 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/669c9dd2-93ed-4be5-8b4c-834706d32358-kubelet-dir\") pod \"csi-node-driver-2d8j7\" (UID: \"669c9dd2-93ed-4be5-8b4c-834706d32358\") " pod="calico-system/csi-node-driver-2d8j7" Jan 17 00:20:22.400110 kubelet[2574]: E0117 00:20:22.400090 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.400110 kubelet[2574]: W0117 00:20:22.400107 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.400146 kubelet[2574]: E0117 00:20:22.400114 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.400146 kubelet[2574]: I0117 00:20:22.400126 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkfrw\" (UniqueName: \"kubernetes.io/projected/669c9dd2-93ed-4be5-8b4c-834706d32358-kube-api-access-vkfrw\") pod \"csi-node-driver-2d8j7\" (UID: \"669c9dd2-93ed-4be5-8b4c-834706d32358\") " pod="calico-system/csi-node-driver-2d8j7" Jan 17 00:20:22.400407 kubelet[2574]: E0117 00:20:22.400321 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.400407 kubelet[2574]: W0117 00:20:22.400332 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.400407 kubelet[2574]: E0117 00:20:22.400338 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:22.400407 kubelet[2574]: I0117 00:20:22.400361 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/669c9dd2-93ed-4be5-8b4c-834706d32358-registration-dir\") pod \"csi-node-driver-2d8j7\" (UID: \"669c9dd2-93ed-4be5-8b4c-834706d32358\") " pod="calico-system/csi-node-driver-2d8j7" Jan 17 00:20:22.400618 kubelet[2574]: E0117 00:20:22.400586 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.400618 kubelet[2574]: W0117 00:20:22.400615 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.400656 kubelet[2574]: E0117 00:20:22.400623 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.400868 kubelet[2574]: E0117 00:20:22.400852 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.400868 kubelet[2574]: W0117 00:20:22.400865 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.400905 kubelet[2574]: E0117 00:20:22.400873 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.401132 kubelet[2574]: E0117 00:20:22.401116 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.401132 kubelet[2574]: W0117 00:20:22.401128 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.401165 kubelet[2574]: E0117 00:20:22.401136 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.401362 kubelet[2574]: E0117 00:20:22.401344 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.401362 kubelet[2574]: W0117 00:20:22.401356 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.401362 kubelet[2574]: E0117 00:20:22.401362 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:22.401642 kubelet[2574]: E0117 00:20:22.401626 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.401642 kubelet[2574]: W0117 00:20:22.401636 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.401692 kubelet[2574]: E0117 00:20:22.401642 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.401922 kubelet[2574]: E0117 00:20:22.401900 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.401922 kubelet[2574]: W0117 00:20:22.401914 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.401922 kubelet[2574]: E0117 00:20:22.401921 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.402165 kubelet[2574]: E0117 00:20:22.402152 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.402165 kubelet[2574]: W0117 00:20:22.402163 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.402203 kubelet[2574]: E0117 00:20:22.402170 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.402386 kubelet[2574]: E0117 00:20:22.402371 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.402386 kubelet[2574]: W0117 00:20:22.402384 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.402386 kubelet[2574]: E0117 00:20:22.402391 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.403114 kubelet[2574]: E0117 00:20:22.402613 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.403114 kubelet[2574]: W0117 00:20:22.402622 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.403114 kubelet[2574]: E0117 00:20:22.402628 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:22.403114 kubelet[2574]: E0117 00:20:22.402806 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.403114 kubelet[2574]: W0117 00:20:22.402812 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.403114 kubelet[2574]: E0117 00:20:22.402818 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.406131 containerd[1508]: time="2026-01-17T00:20:22.405876444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zws6q,Uid:6e5eb92f-c83d-4863-8113-c2752f67db51,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:22.416349 containerd[1508]: time="2026-01-17T00:20:22.416321269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7849bb96f8-v9kjf,Uid:7b28f20b-77ac-412c-8a48-f0a47b89b97e,Namespace:calico-system,Attempt:0,} returns sandbox id \"e386298006c8b08a914e6e083b492a832ed977cdf1a6fad2b9c15a9152d208a2\"" Jan 17 00:20:22.421207 containerd[1508]: time="2026-01-17T00:20:22.421152538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:20:22.435649 containerd[1508]: time="2026-01-17T00:20:22.435014556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:22.435649 containerd[1508]: time="2026-01-17T00:20:22.435061405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:22.435649 containerd[1508]: time="2026-01-17T00:20:22.435069575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:22.435649 containerd[1508]: time="2026-01-17T00:20:22.435147175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:22.460818 systemd[1]: Started cri-containerd-7609f6aa9ddfbbd958937ffc0a522d40541655d8b8d1f49f1e1ffa00c1c3272c.scope - libcontainer container 7609f6aa9ddfbbd958937ffc0a522d40541655d8b8d1f49f1e1ffa00c1c3272c. Jan 17 00:20:22.495945 containerd[1508]: time="2026-01-17T00:20:22.495843641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zws6q,Uid:6e5eb92f-c83d-4863-8113-c2752f67db51,Namespace:calico-system,Attempt:0,} returns sandbox id \"7609f6aa9ddfbbd958937ffc0a522d40541655d8b8d1f49f1e1ffa00c1c3272c\"" Jan 17 00:20:22.500904 kubelet[2574]: E0117 00:20:22.500872 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.500904 kubelet[2574]: W0117 00:20:22.500900 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.500990 kubelet[2574]: E0117 00:20:22.500915 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:22.502355 kubelet[2574]: E0117 00:20:22.502332 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.502355 kubelet[2574]: W0117 00:20:22.502345 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.502355 kubelet[2574]: E0117 00:20:22.502352 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.503020 kubelet[2574]: E0117 00:20:22.503001 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.503020 kubelet[2574]: W0117 00:20:22.503013 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.503020 kubelet[2574]: E0117 00:20:22.503021 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.503319 kubelet[2574]: E0117 00:20:22.503299 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.503319 kubelet[2574]: W0117 00:20:22.503311 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.503319 kubelet[2574]: E0117 00:20:22.503318 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.503715 kubelet[2574]: E0117 00:20:22.503621 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.503715 kubelet[2574]: W0117 00:20:22.503632 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.503715 kubelet[2574]: E0117 00:20:22.503639 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.504114 kubelet[2574]: E0117 00:20:22.504078 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.504114 kubelet[2574]: W0117 00:20:22.504092 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.504114 kubelet[2574]: E0117 00:20:22.504099 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:22.504410 kubelet[2574]: E0117 00:20:22.504389 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.504410 kubelet[2574]: W0117 00:20:22.504401 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.504410 kubelet[2574]: E0117 00:20:22.504407 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.504882 kubelet[2574]: E0117 00:20:22.504779 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.504882 kubelet[2574]: W0117 00:20:22.504791 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.504882 kubelet[2574]: E0117 00:20:22.504798 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.505117 kubelet[2574]: E0117 00:20:22.505099 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.505117 kubelet[2574]: W0117 00:20:22.505112 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.505151 kubelet[2574]: E0117 00:20:22.505119 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.505769 kubelet[2574]: E0117 00:20:22.505747 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.505769 kubelet[2574]: W0117 00:20:22.505760 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.505769 kubelet[2574]: E0117 00:20:22.505767 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.506070 kubelet[2574]: E0117 00:20:22.506051 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.506070 kubelet[2574]: W0117 00:20:22.506062 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.506070 kubelet[2574]: E0117 00:20:22.506069 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:22.506331 kubelet[2574]: E0117 00:20:22.506312 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.506331 kubelet[2574]: W0117 00:20:22.506324 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.506331 kubelet[2574]: E0117 00:20:22.506331 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.506707 kubelet[2574]: E0117 00:20:22.506679 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.506707 kubelet[2574]: W0117 00:20:22.506690 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.506707 kubelet[2574]: E0117 00:20:22.506697 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.506926 kubelet[2574]: E0117 00:20:22.506907 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.506926 kubelet[2574]: W0117 00:20:22.506919 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.506926 kubelet[2574]: E0117 00:20:22.506925 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.507117 kubelet[2574]: E0117 00:20:22.507099 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.507117 kubelet[2574]: W0117 00:20:22.507111 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.507117 kubelet[2574]: E0117 00:20:22.507116 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.507333 kubelet[2574]: E0117 00:20:22.507313 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.507333 kubelet[2574]: W0117 00:20:22.507323 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.507333 kubelet[2574]: E0117 00:20:22.507329 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:22.507566 kubelet[2574]: E0117 00:20:22.507547 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.507566 kubelet[2574]: W0117 00:20:22.507558 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.507566 kubelet[2574]: E0117 00:20:22.507564 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.507825 kubelet[2574]: E0117 00:20:22.507807 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.507825 kubelet[2574]: W0117 00:20:22.507818 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.507825 kubelet[2574]: E0117 00:20:22.507824 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.508036 kubelet[2574]: E0117 00:20:22.508018 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.508036 kubelet[2574]: W0117 00:20:22.508030 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.508036 kubelet[2574]: E0117 00:20:22.508036 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.508287 kubelet[2574]: E0117 00:20:22.508268 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.508287 kubelet[2574]: W0117 00:20:22.508279 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.508287 kubelet[2574]: E0117 00:20:22.508286 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.508945 kubelet[2574]: E0117 00:20:22.508703 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.508945 kubelet[2574]: W0117 00:20:22.508713 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.508945 kubelet[2574]: E0117 00:20:22.508720 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:22.509019 kubelet[2574]: E0117 00:20:22.508991 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.509019 kubelet[2574]: W0117 00:20:22.508998 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.509019 kubelet[2574]: E0117 00:20:22.509004 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.509271 kubelet[2574]: E0117 00:20:22.509251 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.509271 kubelet[2574]: W0117 00:20:22.509262 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.509271 kubelet[2574]: E0117 00:20:22.509269 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.509630 kubelet[2574]: E0117 00:20:22.509508 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.509630 kubelet[2574]: W0117 00:20:22.509517 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.509630 kubelet[2574]: E0117 00:20:22.509523 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.509867 kubelet[2574]: E0117 00:20:22.509844 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.509867 kubelet[2574]: W0117 00:20:22.509856 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.509867 kubelet[2574]: E0117 00:20:22.509863 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:22.516154 kubelet[2574]: E0117 00:20:22.516132 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:22.516154 kubelet[2574]: W0117 00:20:22.516145 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:22.516154 kubelet[2574]: E0117 00:20:22.516153 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:24.268728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417615297.mount: Deactivated successfully. Jan 17 00:20:24.665282 kubelet[2574]: E0117 00:20:24.665132 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:20:25.391946 containerd[1508]: time="2026-01-17T00:20:25.391883124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:25.393042 containerd[1508]: time="2026-01-17T00:20:25.392925952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 17 00:20:25.394627 containerd[1508]: time="2026-01-17T00:20:25.393762131Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:25.395926 containerd[1508]: time="2026-01-17T00:20:25.395385978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:25.395926 containerd[1508]: time="2026-01-17T00:20:25.395830737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.97412159s" Jan 17 00:20:25.395926 containerd[1508]: time="2026-01-17T00:20:25.395857777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 17 00:20:25.397373 containerd[1508]: time="2026-01-17T00:20:25.397243894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:20:25.419216 containerd[1508]: time="2026-01-17T00:20:25.419173877Z" level=info msg="CreateContainer within sandbox \"e386298006c8b08a914e6e083b492a832ed977cdf1a6fad2b9c15a9152d208a2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 00:20:25.433690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115818675.mount: Deactivated successfully. Jan 17 00:20:25.437355 containerd[1508]: time="2026-01-17T00:20:25.437325756Z" level=info msg="CreateContainer within sandbox \"e386298006c8b08a914e6e083b492a832ed977cdf1a6fad2b9c15a9152d208a2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"617d279a06f784fb1a46a0ebb3898e25a12672cffd618ae814bb42e9aa2df66a\"" Jan 17 00:20:25.439408 containerd[1508]: time="2026-01-17T00:20:25.438497503Z" level=info msg="StartContainer for \"617d279a06f784fb1a46a0ebb3898e25a12672cffd618ae814bb42e9aa2df66a\"" Jan 17 00:20:25.461706 systemd[1]: Started cri-containerd-617d279a06f784fb1a46a0ebb3898e25a12672cffd618ae814bb42e9aa2df66a.scope - libcontainer container 617d279a06f784fb1a46a0ebb3898e25a12672cffd618ae814bb42e9aa2df66a. 
Jan 17 00:20:25.500848 containerd[1508]: time="2026-01-17T00:20:25.500820025Z" level=info msg="StartContainer for \"617d279a06f784fb1a46a0ebb3898e25a12672cffd618ae814bb42e9aa2df66a\" returns successfully" Jan 17 00:20:25.774296 kubelet[2574]: I0117 00:20:25.774229 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7849bb96f8-v9kjf" podStartSLOduration=1.796568451 podStartE2EDuration="4.774178342s" podCreationTimestamp="2026-01-17 00:20:21 +0000 UTC" firstStartedPulling="2026-01-17 00:20:22.418878204 +0000 UTC m=+22.895824473" lastFinishedPulling="2026-01-17 00:20:25.396488115 +0000 UTC m=+25.873434364" observedRunningTime="2026-01-17 00:20:25.773829182 +0000 UTC m=+26.250775451" watchObservedRunningTime="2026-01-17 00:20:25.774178342 +0000 UTC m=+26.251124611" Jan 17 00:20:25.812469 kubelet[2574]: E0117 00:20:25.812414 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.812469 kubelet[2574]: W0117 00:20:25.812451 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.812614 kubelet[2574]: E0117 00:20:25.812496 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.813250 kubelet[2574]: E0117 00:20:25.813223 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.813288 kubelet[2574]: W0117 00:20:25.813250 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.813288 kubelet[2574]: E0117 00:20:25.813268 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.813832 kubelet[2574]: E0117 00:20:25.813810 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.813832 kubelet[2574]: W0117 00:20:25.813831 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.813904 kubelet[2574]: E0117 00:20:25.813843 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.814352 kubelet[2574]: E0117 00:20:25.814321 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.814352 kubelet[2574]: W0117 00:20:25.814337 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.814352 kubelet[2574]: E0117 00:20:25.814350 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:25.815069 kubelet[2574]: E0117 00:20:25.815046 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.815069 kubelet[2574]: W0117 00:20:25.815063 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.815132 kubelet[2574]: E0117 00:20:25.815077 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.815701 kubelet[2574]: E0117 00:20:25.815664 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.815701 kubelet[2574]: W0117 00:20:25.815687 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.815701 kubelet[2574]: E0117 00:20:25.815699 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.816177 kubelet[2574]: E0117 00:20:25.816143 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.816177 kubelet[2574]: W0117 00:20:25.816164 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.816177 kubelet[2574]: E0117 00:20:25.816176 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.816864 kubelet[2574]: E0117 00:20:25.816837 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.816864 kubelet[2574]: W0117 00:20:25.816860 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.817386 kubelet[2574]: E0117 00:20:25.816873 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.817564 kubelet[2574]: E0117 00:20:25.817537 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.817586 kubelet[2574]: W0117 00:20:25.817565 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.817748 kubelet[2574]: E0117 00:20:25.817583 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:25.818170 kubelet[2574]: E0117 00:20:25.818128 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.818170 kubelet[2574]: W0117 00:20:25.818148 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.818170 kubelet[2574]: E0117 00:20:25.818162 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.818929 kubelet[2574]: E0117 00:20:25.818900 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.818929 kubelet[2574]: W0117 00:20:25.818919 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.818992 kubelet[2574]: E0117 00:20:25.818932 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.819373 kubelet[2574]: E0117 00:20:25.819333 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.819373 kubelet[2574]: W0117 00:20:25.819349 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.819373 kubelet[2574]: E0117 00:20:25.819362 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.819818 kubelet[2574]: E0117 00:20:25.819790 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.819818 kubelet[2574]: W0117 00:20:25.819813 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.819965 kubelet[2574]: E0117 00:20:25.819827 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.820649 kubelet[2574]: E0117 00:20:25.820578 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.820649 kubelet[2574]: W0117 00:20:25.820645 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.820716 kubelet[2574]: E0117 00:20:25.820663 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:25.821701 kubelet[2574]: E0117 00:20:25.821670 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.821701 kubelet[2574]: W0117 00:20:25.821699 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.821775 kubelet[2574]: E0117 00:20:25.821714 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.831982 kubelet[2574]: E0117 00:20:25.831364 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.831982 kubelet[2574]: W0117 00:20:25.831387 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.831982 kubelet[2574]: E0117 00:20:25.831403 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.832107 kubelet[2574]: E0117 00:20:25.832098 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.832131 kubelet[2574]: W0117 00:20:25.832112 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.832131 kubelet[2574]: E0117 00:20:25.832125 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.832757 kubelet[2574]: E0117 00:20:25.832736 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.832757 kubelet[2574]: W0117 00:20:25.832754 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.832834 kubelet[2574]: E0117 00:20:25.832769 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.833728 kubelet[2574]: E0117 00:20:25.833696 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.833728 kubelet[2574]: W0117 00:20:25.833721 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.833814 kubelet[2574]: E0117 00:20:25.833736 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:25.834186 kubelet[2574]: E0117 00:20:25.834119 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.834186 kubelet[2574]: W0117 00:20:25.834178 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.834236 kubelet[2574]: E0117 00:20:25.834190 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.834639 kubelet[2574]: E0117 00:20:25.834588 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.834639 kubelet[2574]: W0117 00:20:25.834637 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.834700 kubelet[2574]: E0117 00:20:25.834649 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.835146 kubelet[2574]: E0117 00:20:25.835121 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.835146 kubelet[2574]: W0117 00:20:25.835141 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.835198 kubelet[2574]: E0117 00:20:25.835154 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.836178 kubelet[2574]: E0117 00:20:25.836144 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.836210 kubelet[2574]: W0117 00:20:25.836175 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.836210 kubelet[2574]: E0117 00:20:25.836199 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.836697 kubelet[2574]: E0117 00:20:25.836670 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.836697 kubelet[2574]: W0117 00:20:25.836692 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.836748 kubelet[2574]: E0117 00:20:25.836705 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:25.837768 kubelet[2574]: E0117 00:20:25.837741 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.837768 kubelet[2574]: W0117 00:20:25.837762 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.837870 kubelet[2574]: E0117 00:20:25.837776 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.839029 kubelet[2574]: E0117 00:20:25.838995 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.839029 kubelet[2574]: W0117 00:20:25.839017 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.839088 kubelet[2574]: E0117 00:20:25.839031 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.839486 kubelet[2574]: E0117 00:20:25.839447 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.839486 kubelet[2574]: W0117 00:20:25.839481 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.839534 kubelet[2574]: E0117 00:20:25.839495 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.840015 kubelet[2574]: E0117 00:20:25.839975 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.840015 kubelet[2574]: W0117 00:20:25.840013 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.840073 kubelet[2574]: E0117 00:20:25.840027 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.840464 kubelet[2574]: E0117 00:20:25.840424 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.840464 kubelet[2574]: W0117 00:20:25.840444 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.840522 kubelet[2574]: E0117 00:20:25.840471 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:25.841208 kubelet[2574]: E0117 00:20:25.841181 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.841208 kubelet[2574]: W0117 00:20:25.841203 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.841266 kubelet[2574]: E0117 00:20:25.841218 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.843095 kubelet[2574]: E0117 00:20:25.843064 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.843095 kubelet[2574]: W0117 00:20:25.843091 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.843167 kubelet[2574]: E0117 00:20:25.843106 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.843550 kubelet[2574]: E0117 00:20:25.843526 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.843550 kubelet[2574]: W0117 00:20:25.843547 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.843615 kubelet[2574]: E0117 00:20:25.843559 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:25.844210 kubelet[2574]: E0117 00:20:25.844191 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:25.844210 kubelet[2574]: W0117 00:20:25.844208 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:25.844311 kubelet[2574]: E0117 00:20:25.844221 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:26.665695 kubelet[2574]: E0117 00:20:26.665575 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:20:26.766949 kubelet[2574]: I0117 00:20:26.766876 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:20:26.829795 kubelet[2574]: E0117 00:20:26.829372 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.829795 kubelet[2574]: W0117 00:20:26.829407 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.829795 kubelet[2574]: E0117 00:20:26.829437 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.833675 kubelet[2574]: E0117 00:20:26.832778 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.833675 kubelet[2574]: W0117 00:20:26.832804 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.833675 kubelet[2574]: E0117 00:20:26.832829 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.833675 kubelet[2574]: E0117 00:20:26.833424 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.833675 kubelet[2574]: W0117 00:20:26.833452 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.833675 kubelet[2574]: E0117 00:20:26.833514 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.834823 kubelet[2574]: E0117 00:20:26.834397 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.834823 kubelet[2574]: W0117 00:20:26.834418 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.834823 kubelet[2574]: E0117 00:20:26.834435 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:26.835990 kubelet[2574]: E0117 00:20:26.835725 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.835990 kubelet[2574]: W0117 00:20:26.835753 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.835990 kubelet[2574]: E0117 00:20:26.835787 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.839916 kubelet[2574]: E0117 00:20:26.837752 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.839916 kubelet[2574]: W0117 00:20:26.837775 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.839916 kubelet[2574]: E0117 00:20:26.837793 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.839916 kubelet[2574]: E0117 00:20:26.839723 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.839916 kubelet[2574]: W0117 00:20:26.839744 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.839916 kubelet[2574]: E0117 00:20:26.839763 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.841154 kubelet[2574]: E0117 00:20:26.840972 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.841154 kubelet[2574]: W0117 00:20:26.841146 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.841289 kubelet[2574]: E0117 00:20:26.841179 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.843813 kubelet[2574]: E0117 00:20:26.843435 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.843813 kubelet[2574]: W0117 00:20:26.843494 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.843813 kubelet[2574]: E0117 00:20:26.843534 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:26.844494 kubelet[2574]: E0117 00:20:26.844305 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.844494 kubelet[2574]: W0117 00:20:26.844335 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.844494 kubelet[2574]: E0117 00:20:26.844360 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.846792 kubelet[2574]: E0117 00:20:26.846447 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.846792 kubelet[2574]: W0117 00:20:26.846499 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.846792 kubelet[2574]: E0117 00:20:26.846524 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.847795 kubelet[2574]: E0117 00:20:26.847738 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.847795 kubelet[2574]: W0117 00:20:26.847770 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.847795 kubelet[2574]: E0117 00:20:26.847793 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.848453 kubelet[2574]: E0117 00:20:26.848408 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.848453 kubelet[2574]: W0117 00:20:26.848440 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.848573 kubelet[2574]: E0117 00:20:26.848478 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.849077 kubelet[2574]: E0117 00:20:26.849037 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.849077 kubelet[2574]: W0117 00:20:26.849061 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.849077 kubelet[2574]: E0117 00:20:26.849077 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:26.849635 kubelet[2574]: E0117 00:20:26.849556 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.849635 kubelet[2574]: W0117 00:20:26.849585 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.849806 kubelet[2574]: E0117 00:20:26.849643 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.851775 kubelet[2574]: E0117 00:20:26.851722 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.851775 kubelet[2574]: W0117 00:20:26.851760 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.851775 kubelet[2574]: E0117 00:20:26.851778 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.854891 kubelet[2574]: E0117 00:20:26.854842 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.854891 kubelet[2574]: W0117 00:20:26.854881 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.855056 kubelet[2574]: E0117 00:20:26.854920 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.855848 kubelet[2574]: E0117 00:20:26.855807 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.855918 kubelet[2574]: W0117 00:20:26.855852 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.855918 kubelet[2574]: E0117 00:20:26.855886 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.859446 kubelet[2574]: E0117 00:20:26.859397 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.859527 kubelet[2574]: W0117 00:20:26.859442 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.859527 kubelet[2574]: E0117 00:20:26.859500 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:26.860250 kubelet[2574]: E0117 00:20:26.860195 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.860305 kubelet[2574]: W0117 00:20:26.860245 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.860305 kubelet[2574]: E0117 00:20:26.860287 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.861835 kubelet[2574]: E0117 00:20:26.861738 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.861835 kubelet[2574]: W0117 00:20:26.861781 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.861835 kubelet[2574]: E0117 00:20:26.861810 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.864505 kubelet[2574]: E0117 00:20:26.863995 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.864505 kubelet[2574]: W0117 00:20:26.864020 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.864505 kubelet[2574]: E0117 00:20:26.864049 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.866961 kubelet[2574]: E0117 00:20:26.866936 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.867072 kubelet[2574]: W0117 00:20:26.867047 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.867174 kubelet[2574]: E0117 00:20:26.867156 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.867935 kubelet[2574]: E0117 00:20:26.867911 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.868130 kubelet[2574]: W0117 00:20:26.868069 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.868245 kubelet[2574]: E0117 00:20:26.868228 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:26.870380 kubelet[2574]: E0117 00:20:26.870343 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.870505 kubelet[2574]: W0117 00:20:26.870485 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.870725 kubelet[2574]: E0117 00:20:26.870705 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.873207 kubelet[2574]: E0117 00:20:26.872959 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.873207 kubelet[2574]: W0117 00:20:26.872984 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.873207 kubelet[2574]: E0117 00:20:26.873003 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.874127 kubelet[2574]: E0117 00:20:26.874102 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.874822 kubelet[2574]: W0117 00:20:26.874218 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.874822 kubelet[2574]: E0117 00:20:26.874241 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.875729 kubelet[2574]: E0117 00:20:26.875705 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.875888 kubelet[2574]: W0117 00:20:26.875850 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.875967 kubelet[2574]: E0117 00:20:26.875948 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.876458 kubelet[2574]: E0117 00:20:26.876436 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.876581 kubelet[2574]: W0117 00:20:26.876564 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.876716 kubelet[2574]: E0117 00:20:26.876696 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:26.877286 kubelet[2574]: E0117 00:20:26.877256 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.877286 kubelet[2574]: W0117 00:20:26.877274 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.877286 kubelet[2574]: E0117 00:20:26.877292 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.877798 kubelet[2574]: E0117 00:20:26.877765 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.877798 kubelet[2574]: W0117 00:20:26.877791 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.878174 kubelet[2574]: E0117 00:20:26.877807 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.878312 kubelet[2574]: E0117 00:20:26.878279 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.878312 kubelet[2574]: W0117 00:20:26.878294 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.878312 kubelet[2574]: E0117 00:20:26.878309 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:26.879438 kubelet[2574]: E0117 00:20:26.879388 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:26.879438 kubelet[2574]: W0117 00:20:26.879413 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:26.879438 kubelet[2574]: E0117 00:20:26.879429 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:20:27.353838 containerd[1508]: time="2026-01-17T00:20:27.353716259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:27.356036 containerd[1508]: time="2026-01-17T00:20:27.355949086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 17 00:20:27.357492 containerd[1508]: time="2026-01-17T00:20:27.357445274Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:27.361577 containerd[1508]: time="2026-01-17T00:20:27.361498719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:27.363790 containerd[1508]: time="2026-01-17T00:20:27.362850906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.965579562s" Jan 17 00:20:27.363790 containerd[1508]: time="2026-01-17T00:20:27.362903496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:20:27.371512 containerd[1508]: time="2026-01-17T00:20:27.371419825Z" level=info msg="CreateContainer within sandbox \"7609f6aa9ddfbbd958937ffc0a522d40541655d8b8d1f49f1e1ffa00c1c3272c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:20:27.395436 containerd[1508]: time="2026-01-17T00:20:27.395373762Z" level=info msg="CreateContainer within sandbox \"7609f6aa9ddfbbd958937ffc0a522d40541655d8b8d1f49f1e1ffa00c1c3272c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c7e368d4cd8f5a11c07f238a19aa5b72a28956fe1bd6e511023e6a78edacbfb8\"" Jan 17 00:20:27.396225 containerd[1508]: time="2026-01-17T00:20:27.396172551Z" level=info msg="StartContainer for \"c7e368d4cd8f5a11c07f238a19aa5b72a28956fe1bd6e511023e6a78edacbfb8\"" Jan 17 00:20:27.459846 systemd[1]: Started cri-containerd-c7e368d4cd8f5a11c07f238a19aa5b72a28956fe1bd6e511023e6a78edacbfb8.scope - libcontainer container c7e368d4cd8f5a11c07f238a19aa5b72a28956fe1bd6e511023e6a78edacbfb8. Jan 17 00:20:27.526663 containerd[1508]: time="2026-01-17T00:20:27.524824224Z" level=info msg="StartContainer for \"c7e368d4cd8f5a11c07f238a19aa5b72a28956fe1bd6e511023e6a78edacbfb8\" returns successfully" Jan 17 00:20:27.551218 systemd[1]: cri-containerd-c7e368d4cd8f5a11c07f238a19aa5b72a28956fe1bd6e511023e6a78edacbfb8.scope: Deactivated successfully. Jan 17 00:20:27.585138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7e368d4cd8f5a11c07f238a19aa5b72a28956fe1bd6e511023e6a78edacbfb8-rootfs.mount: Deactivated successfully. 
Jan 17 00:20:27.711677 containerd[1508]: time="2026-01-17T00:20:27.711522038Z" level=info msg="shim disconnected" id=c7e368d4cd8f5a11c07f238a19aa5b72a28956fe1bd6e511023e6a78edacbfb8 namespace=k8s.io Jan 17 00:20:27.712253 containerd[1508]: time="2026-01-17T00:20:27.711648338Z" level=warning msg="cleaning up after shim disconnected" id=c7e368d4cd8f5a11c07f238a19aa5b72a28956fe1bd6e511023e6a78edacbfb8 namespace=k8s.io Jan 17 00:20:27.712253 containerd[1508]: time="2026-01-17T00:20:27.711996288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:20:27.776047 containerd[1508]: time="2026-01-17T00:20:27.775953221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:20:28.665779 kubelet[2574]: E0117 00:20:28.665685 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:20:30.665734 kubelet[2574]: E0117 00:20:30.665193 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:20:31.907773 containerd[1508]: time="2026-01-17T00:20:31.907733405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:31.908814 containerd[1508]: time="2026-01-17T00:20:31.908727134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:20:31.910440 containerd[1508]: time="2026-01-17T00:20:31.909568923Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:31.911614 containerd[1508]: time="2026-01-17T00:20:31.911315693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:31.912081 containerd[1508]: time="2026-01-17T00:20:31.911734412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.135733041s" Jan 17 00:20:31.912081 containerd[1508]: time="2026-01-17T00:20:31.911755132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:20:31.914999 containerd[1508]: time="2026-01-17T00:20:31.914975749Z" level=info msg="CreateContainer within sandbox \"7609f6aa9ddfbbd958937ffc0a522d40541655d8b8d1f49f1e1ffa00c1c3272c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:20:31.932529 containerd[1508]: time="2026-01-17T00:20:31.932493256Z" level=info msg="CreateContainer within sandbox \"7609f6aa9ddfbbd958937ffc0a522d40541655d8b8d1f49f1e1ffa00c1c3272c\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9a591e70601ef043434e04392aa6179bbe0f47a6d92ebeea4b2c13acf5ddc63d\"" Jan 17 00:20:31.933026 containerd[1508]: time="2026-01-17T00:20:31.933010976Z" level=info msg="StartContainer for \"9a591e70601ef043434e04392aa6179bbe0f47a6d92ebeea4b2c13acf5ddc63d\"" Jan 17 00:20:31.955303 systemd[1]: run-containerd-runc-k8s.io-9a591e70601ef043434e04392aa6179bbe0f47a6d92ebeea4b2c13acf5ddc63d-runc.VTD0Xc.mount: Deactivated successfully. Jan 17 00:20:31.959713 systemd[1]: Started cri-containerd-9a591e70601ef043434e04392aa6179bbe0f47a6d92ebeea4b2c13acf5ddc63d.scope - libcontainer container 9a591e70601ef043434e04392aa6179bbe0f47a6d92ebeea4b2c13acf5ddc63d. Jan 17 00:20:31.984191 containerd[1508]: time="2026-01-17T00:20:31.984106366Z" level=info msg="StartContainer for \"9a591e70601ef043434e04392aa6179bbe0f47a6d92ebeea4b2c13acf5ddc63d\" returns successfully" Jan 17 00:20:32.523690 containerd[1508]: time="2026-01-17T00:20:32.523575514Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:20:32.531046 systemd[1]: cri-containerd-9a591e70601ef043434e04392aa6179bbe0f47a6d92ebeea4b2c13acf5ddc63d.scope: Deactivated successfully. Jan 17 00:20:32.566225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a591e70601ef043434e04392aa6179bbe0f47a6d92ebeea4b2c13acf5ddc63d-rootfs.mount: Deactivated successfully. Jan 17 00:20:32.618927 kubelet[2574]: I0117 00:20:32.618753 2574 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:20:32.678141 containerd[1508]: time="2026-01-17T00:20:32.675420336Z" level=info msg="shim disconnected" id=9a591e70601ef043434e04392aa6179bbe0f47a6d92ebeea4b2c13acf5ddc63d namespace=k8s.io Jan 17 00:20:32.678141 containerd[1508]: time="2026-01-17T00:20:32.675543586Z" level=warning msg="cleaning up after shim disconnected" id=9a591e70601ef043434e04392aa6179bbe0f47a6d92ebeea4b2c13acf5ddc63d namespace=k8s.io Jan 17 00:20:32.678141 containerd[1508]: time="2026-01-17T00:20:32.675586286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:20:32.691654 systemd[1]: Created slice kubepods-besteffort-pod669c9dd2_93ed_4be5_8b4c_834706d32358.slice - libcontainer container kubepods-besteffort-pod669c9dd2_93ed_4be5_8b4c_834706d32358.slice. 
Jan 17 00:20:32.703648 kubelet[2574]: I0117 00:20:32.702170 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vcbm\" (UniqueName: \"kubernetes.io/projected/10c4610a-ed07-4e29-932b-b9ab7749e6ed-kube-api-access-9vcbm\") pod \"calico-apiserver-7b598cf86d-jkqzc\" (UID: \"10c4610a-ed07-4e29-932b-b9ab7749e6ed\") " pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" Jan 17 00:20:32.703648 kubelet[2574]: I0117 00:20:32.702211 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phn2g\" (UniqueName: \"kubernetes.io/projected/2288281e-fdb7-48d8-b727-f0cc9e2d198b-kube-api-access-phn2g\") pod \"coredns-674b8bbfcf-hv54k\" (UID: \"2288281e-fdb7-48d8-b727-f0cc9e2d198b\") " pod="kube-system/coredns-674b8bbfcf-hv54k" Jan 17 00:20:32.703648 kubelet[2574]: I0117 00:20:32.702236 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2288281e-fdb7-48d8-b727-f0cc9e2d198b-config-volume\") pod \"coredns-674b8bbfcf-hv54k\" (UID: \"2288281e-fdb7-48d8-b727-f0cc9e2d198b\") " pod="kube-system/coredns-674b8bbfcf-hv54k" Jan 17 00:20:32.703648 kubelet[2574]: I0117 00:20:32.702258 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/10c4610a-ed07-4e29-932b-b9ab7749e6ed-calico-apiserver-certs\") pod \"calico-apiserver-7b598cf86d-jkqzc\" (UID: \"10c4610a-ed07-4e29-932b-b9ab7749e6ed\") " pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" Jan 17 00:20:32.704724 containerd[1508]: time="2026-01-17T00:20:32.704258417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2d8j7,Uid:669c9dd2-93ed-4be5-8b4c-834706d32358,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:32.719091 systemd[1]: Created slice kubepods-besteffort-pod10c4610a_ed07_4e29_932b_b9ab7749e6ed.slice - libcontainer container kubepods-besteffort-pod10c4610a_ed07_4e29_932b_b9ab7749e6ed.slice. Jan 17 00:20:32.733115 systemd[1]: Created slice kubepods-burstable-podfe9eb613_1b2e_4b40_8b1b_77be36bfdc32.slice - libcontainer container kubepods-burstable-podfe9eb613_1b2e_4b40_8b1b_77be36bfdc32.slice. Jan 17 00:20:32.748422 systemd[1]: Created slice kubepods-burstable-pod2288281e_fdb7_48d8_b727_f0cc9e2d198b.slice - libcontainer container kubepods-burstable-pod2288281e_fdb7_48d8_b727_f0cc9e2d198b.slice. Jan 17 00:20:32.760297 systemd[1]: Created slice kubepods-besteffort-pod7b9ac0b2_c7c5_4408_8470_3fecd940db64.slice - libcontainer container kubepods-besteffort-pod7b9ac0b2_c7c5_4408_8470_3fecd940db64.slice. Jan 17 00:20:32.770247 systemd[1]: Created slice kubepods-besteffort-podee43eed9_c394_4ae0_a0e3_7818f2df122b.slice - libcontainer container kubepods-besteffort-podee43eed9_c394_4ae0_a0e3_7818f2df122b.slice. Jan 17 00:20:32.776971 systemd[1]: Created slice kubepods-besteffort-podd3748345_d737_4edc_b312_ed0fa45e5e25.slice - libcontainer container kubepods-besteffort-podd3748345_d737_4edc_b312_ed0fa45e5e25.slice. Jan 17 00:20:32.786298 systemd[1]: Created slice kubepods-besteffort-pode8ec3d55_57ab_493d_b18c_44cba62fcddb.slice - libcontainer container kubepods-besteffort-pode8ec3d55_57ab_493d_b18c_44cba62fcddb.slice. 
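
The long run of RunPodSandbox failures that follows all traces back to one missing file: the Calico CNI plugin stats /var/lib/calico/nodename, which the calico/node container writes only once it is up, so until then every sandbox setup (and teardown) fails with the same message. A minimal Go sketch of that check (illustrative; the real plugin does considerably more):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        const nodenameFile = "/var/lib/calico/nodename"

        data, err := os.ReadFile(nodenameFile)
        if errors.Is(err, fs.ErrNotExist) {
            // This is the condition behind "stat /var/lib/calico/nodename:
            // no such file or directory" in the errors below.
            fmt.Println("check that the calico/node container is running and has mounted /var/lib/calico/")
            return
        } else if err != nil {
            fmt.Println("read error:", err)
            return
        }
        fmt.Printf("node name: %s\n", data)
    }
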
Jan 17 00:20:32.792836 systemd[1]: Created slice kubepods-besteffort-poda4983325_8320_4092_8a15_0de07c45e1dd.slice - libcontainer container kubepods-besteffort-poda4983325_8320_4092_8a15_0de07c45e1dd.slice. Jan 17 00:20:32.799174 containerd[1508]: time="2026-01-17T00:20:32.799051195Z" level=error msg="Failed to destroy network for sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:32.800268 containerd[1508]: time="2026-01-17T00:20:32.799587015Z" level=error msg="encountered an error cleaning up failed sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:32.800268 containerd[1508]: time="2026-01-17T00:20:32.800099814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2d8j7,Uid:669c9dd2-93ed-4be5-8b4c-834706d32358,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:32.800499 containerd[1508]: time="2026-01-17T00:20:32.800463295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:20:32.803538 kubelet[2574]: E0117 00:20:32.803431 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:32.803538 kubelet[2574]: E0117 00:20:32.803513 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2d8j7" Jan 17 00:20:32.803538 kubelet[2574]: E0117 00:20:32.803528 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2d8j7" Jan 17 00:20:32.803636 kubelet[2574]: E0117 00:20:32.803561 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2d8j7_calico-system(669c9dd2-93ed-4be5-8b4c-834706d32358)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2d8j7_calico-system(669c9dd2-93ed-4be5-8b4c-834706d32358)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:20:32.805488 kubelet[2574]: I0117 00:20:32.805229 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7lgp\" (UniqueName: \"kubernetes.io/projected/e8ec3d55-57ab-493d-b18c-44cba62fcddb-kube-api-access-s7lgp\") pod \"calico-apiserver-79d8d794ff-xflgs\" (UID: \"e8ec3d55-57ab-493d-b18c-44cba62fcddb\") " pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" Jan 17 00:20:32.805488 kubelet[2574]: I0117 00:20:32.805277 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mxk2\" (UniqueName: \"kubernetes.io/projected/fe9eb613-1b2e-4b40-8b1b-77be36bfdc32-kube-api-access-7mxk2\") pod \"coredns-674b8bbfcf-tfsvt\" (UID: \"fe9eb613-1b2e-4b40-8b1b-77be36bfdc32\") " pod="kube-system/coredns-674b8bbfcf-tfsvt" Jan 17 00:20:32.805488 kubelet[2574]: I0117 00:20:32.805291 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3748345-d737-4edc-b312-ed0fa45e5e25-goldmane-ca-bundle\") pod \"goldmane-666569f655-fw7xc\" (UID: \"d3748345-d737-4edc-b312-ed0fa45e5e25\") " pod="calico-system/goldmane-666569f655-fw7xc" Jan 17 00:20:32.805488 kubelet[2574]: I0117 00:20:32.805307 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmmfz\" (UniqueName: \"kubernetes.io/projected/ee43eed9-c394-4ae0-a0e3-7818f2df122b-kube-api-access-qmmfz\") pod \"calico-apiserver-7b598cf86d-t5pf2\" (UID: \"ee43eed9-c394-4ae0-a0e3-7818f2df122b\") " pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" Jan 17 00:20:32.805488 kubelet[2574]: I0117 00:20:32.805321 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3748345-d737-4edc-b312-ed0fa45e5e25-config\") pod \"goldmane-666569f655-fw7xc\" (UID: \"d3748345-d737-4edc-b312-ed0fa45e5e25\") " pod="calico-system/goldmane-666569f655-fw7xc" Jan 17 00:20:32.805616 kubelet[2574]: I0117 00:20:32.805334 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d3748345-d737-4edc-b312-ed0fa45e5e25-goldmane-key-pair\") pod \"goldmane-666569f655-fw7xc\" (UID: \"d3748345-d737-4edc-b312-ed0fa45e5e25\") " pod="calico-system/goldmane-666569f655-fw7xc" Jan 17 00:20:32.805616 kubelet[2574]: I0117 00:20:32.805346 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8ec3d55-57ab-493d-b18c-44cba62fcddb-calico-apiserver-certs\") pod \"calico-apiserver-79d8d794ff-xflgs\" (UID: \"e8ec3d55-57ab-493d-b18c-44cba62fcddb\") " pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" Jan 17 00:20:32.805616 kubelet[2574]: I0117 00:20:32.805376 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/a4983325-8320-4092-8a15-0de07c45e1dd-whisker-backend-key-pair\") pod \"whisker-5dd767c58f-tmjms\" (UID: \"a4983325-8320-4092-8a15-0de07c45e1dd\") " pod="calico-system/whisker-5dd767c58f-tmjms" Jan 17 00:20:32.805616 kubelet[2574]: I0117 00:20:32.805390 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4983325-8320-4092-8a15-0de07c45e1dd-whisker-ca-bundle\") pod \"whisker-5dd767c58f-tmjms\" (UID: \"a4983325-8320-4092-8a15-0de07c45e1dd\") " pod="calico-system/whisker-5dd767c58f-tmjms" Jan 17 00:20:32.805616 kubelet[2574]: I0117 00:20:32.805401 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq6wf\" (UniqueName: \"kubernetes.io/projected/a4983325-8320-4092-8a15-0de07c45e1dd-kube-api-access-sq6wf\") pod \"whisker-5dd767c58f-tmjms\" (UID: \"a4983325-8320-4092-8a15-0de07c45e1dd\") " pod="calico-system/whisker-5dd767c58f-tmjms" Jan 17 00:20:32.805735 kubelet[2574]: I0117 00:20:32.805424 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee43eed9-c394-4ae0-a0e3-7818f2df122b-calico-apiserver-certs\") pod \"calico-apiserver-7b598cf86d-t5pf2\" (UID: \"ee43eed9-c394-4ae0-a0e3-7818f2df122b\") " pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" Jan 17 00:20:32.805735 kubelet[2574]: I0117 00:20:32.805439 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b9ac0b2-c7c5-4408-8470-3fecd940db64-tigera-ca-bundle\") pod \"calico-kube-controllers-7779db755c-krrrf\" (UID: \"7b9ac0b2-c7c5-4408-8470-3fecd940db64\") " pod="calico-system/calico-kube-controllers-7779db755c-krrrf" Jan 17 00:20:32.805735 kubelet[2574]: I0117 00:20:32.805453 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ck7h\" (UniqueName: \"kubernetes.io/projected/d3748345-d737-4edc-b312-ed0fa45e5e25-kube-api-access-5ck7h\") pod \"goldmane-666569f655-fw7xc\" (UID: \"d3748345-d737-4edc-b312-ed0fa45e5e25\") " pod="calico-system/goldmane-666569f655-fw7xc" Jan 17 00:20:32.807248 kubelet[2574]: I0117 00:20:32.806880 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe9eb613-1b2e-4b40-8b1b-77be36bfdc32-config-volume\") pod \"coredns-674b8bbfcf-tfsvt\" (UID: \"fe9eb613-1b2e-4b40-8b1b-77be36bfdc32\") " pod="kube-system/coredns-674b8bbfcf-tfsvt" Jan 17 00:20:32.810914 kubelet[2574]: I0117 00:20:32.810653 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2qj4\" (UniqueName: \"kubernetes.io/projected/7b9ac0b2-c7c5-4408-8470-3fecd940db64-kube-api-access-z2qj4\") pod \"calico-kube-controllers-7779db755c-krrrf\" (UID: \"7b9ac0b2-c7c5-4408-8470-3fecd940db64\") " pod="calico-system/calico-kube-controllers-7779db755c-krrrf" Jan 17 00:20:33.028426 containerd[1508]: time="2026-01-17T00:20:33.027862740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b598cf86d-jkqzc,Uid:10c4610a-ed07-4e29-932b-b9ab7749e6ed,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:20:33.038458 containerd[1508]: time="2026-01-17T00:20:33.037946845Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-tfsvt,Uid:fe9eb613-1b2e-4b40-8b1b-77be36bfdc32,Namespace:kube-system,Attempt:0,}" Jan 17 00:20:33.055977 containerd[1508]: time="2026-01-17T00:20:33.055759696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hv54k,Uid:2288281e-fdb7-48d8-b727-f0cc9e2d198b,Namespace:kube-system,Attempt:0,}" Jan 17 00:20:33.070085 containerd[1508]: time="2026-01-17T00:20:33.070039318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7779db755c-krrrf,Uid:7b9ac0b2-c7c5-4408-8470-3fecd940db64,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:33.075231 containerd[1508]: time="2026-01-17T00:20:33.074950985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b598cf86d-t5pf2,Uid:ee43eed9-c394-4ae0-a0e3-7818f2df122b,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:20:33.081666 containerd[1508]: time="2026-01-17T00:20:33.081450362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fw7xc,Uid:d3748345-d737-4edc-b312-ed0fa45e5e25,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:33.091682 containerd[1508]: time="2026-01-17T00:20:33.091638737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d8d794ff-xflgs,Uid:e8ec3d55-57ab-493d-b18c-44cba62fcddb,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:20:33.101684 containerd[1508]: time="2026-01-17T00:20:33.101522951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd767c58f-tmjms,Uid:a4983325-8320-4092-8a15-0de07c45e1dd,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:33.209330 containerd[1508]: time="2026-01-17T00:20:33.209288114Z" level=error msg="Failed to destroy network for sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.209782 containerd[1508]: time="2026-01-17T00:20:33.209764775Z" level=error msg="encountered an error cleaning up failed sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.209880 containerd[1508]: time="2026-01-17T00:20:33.209861994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b598cf86d-jkqzc,Uid:10c4610a-ed07-4e29-932b-b9ab7749e6ed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.210520 kubelet[2574]: E0117 00:20:33.210183 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.210520 kubelet[2574]: E0117 00:20:33.210238 2574 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" Jan 17 00:20:33.210520 kubelet[2574]: E0117 00:20:33.210260 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" Jan 17 00:20:33.210626 kubelet[2574]: E0117 00:20:33.210306 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b598cf86d-jkqzc_calico-apiserver(10c4610a-ed07-4e29-932b-b9ab7749e6ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b598cf86d-jkqzc_calico-apiserver(10c4610a-ed07-4e29-932b-b9ab7749e6ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:20:33.238823 containerd[1508]: time="2026-01-17T00:20:33.238572149Z" level=error msg="Failed to destroy network for sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.239586 containerd[1508]: time="2026-01-17T00:20:33.239566149Z" level=error msg="encountered an error cleaning up failed sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.239950 containerd[1508]: time="2026-01-17T00:20:33.239914628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hv54k,Uid:2288281e-fdb7-48d8-b727-f0cc9e2d198b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.241080 kubelet[2574]: E0117 00:20:33.240199 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 17 00:20:33.241080 kubelet[2574]: E0117 00:20:33.240246 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hv54k" Jan 17 00:20:33.241080 kubelet[2574]: E0117 00:20:33.240263 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hv54k" Jan 17 00:20:33.241174 kubelet[2574]: E0117 00:20:33.240310 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hv54k_kube-system(2288281e-fdb7-48d8-b727-f0cc9e2d198b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hv54k_kube-system(2288281e-fdb7-48d8-b727-f0cc9e2d198b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hv54k" podUID="2288281e-fdb7-48d8-b727-f0cc9e2d198b" Jan 17 00:20:33.257127 containerd[1508]: time="2026-01-17T00:20:33.257049370Z" level=error msg="Failed to destroy network for sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.258291 containerd[1508]: time="2026-01-17T00:20:33.258255548Z" level=error msg="encountered an error cleaning up failed sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.258548 containerd[1508]: time="2026-01-17T00:20:33.258497148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tfsvt,Uid:fe9eb613-1b2e-4b40-8b1b-77be36bfdc32,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.260168 kubelet[2574]: E0117 00:20:33.259817 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.260168 kubelet[2574]: E0117 00:20:33.259875 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tfsvt" Jan 17 00:20:33.260168 kubelet[2574]: E0117 00:20:33.259893 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tfsvt" Jan 17 00:20:33.260256 kubelet[2574]: E0117 00:20:33.259949 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tfsvt_kube-system(fe9eb613-1b2e-4b40-8b1b-77be36bfdc32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tfsvt_kube-system(fe9eb613-1b2e-4b40-8b1b-77be36bfdc32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tfsvt" podUID="fe9eb613-1b2e-4b40-8b1b-77be36bfdc32" Jan 17 00:20:33.267209 containerd[1508]: time="2026-01-17T00:20:33.267152274Z" level=error msg="Failed to destroy network for sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.267793 containerd[1508]: time="2026-01-17T00:20:33.267767743Z" level=error msg="encountered an error cleaning up failed sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.267836 containerd[1508]: time="2026-01-17T00:20:33.267812133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fw7xc,Uid:d3748345-d737-4edc-b312-ed0fa45e5e25,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.267984 kubelet[2574]: E0117 00:20:33.267931 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.267984 kubelet[2574]: E0117 00:20:33.267965 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fw7xc" Jan 17 00:20:33.267984 kubelet[2574]: E0117 00:20:33.267979 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fw7xc" Jan 17 00:20:33.268057 kubelet[2574]: E0117 00:20:33.268011 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-fw7xc_calico-system(d3748345-d737-4edc-b312-ed0fa45e5e25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-fw7xc_calico-system(d3748345-d737-4edc-b312-ed0fa45e5e25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:20:33.305687 containerd[1508]: time="2026-01-17T00:20:33.304949823Z" level=error msg="Failed to destroy network for sandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.305687 containerd[1508]: time="2026-01-17T00:20:33.305273574Z" level=error msg="encountered an error cleaning up failed sandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.305687 containerd[1508]: time="2026-01-17T00:20:33.305310924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7779db755c-krrrf,Uid:7b9ac0b2-c7c5-4408-8470-3fecd940db64,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.305873 kubelet[2574]: E0117 00:20:33.305469 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.305873 kubelet[2574]: E0117 00:20:33.305529 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" Jan 17 00:20:33.305873 kubelet[2574]: E0117 00:20:33.305545 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" Jan 17 00:20:33.305959 kubelet[2574]: E0117 00:20:33.305580 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7779db755c-krrrf_calico-system(7b9ac0b2-c7c5-4408-8470-3fecd940db64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7779db755c-krrrf_calico-system(7b9ac0b2-c7c5-4408-8470-3fecd940db64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:20:33.313541 containerd[1508]: time="2026-01-17T00:20:33.313497869Z" level=error msg="Failed to destroy network for sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.313992 containerd[1508]: time="2026-01-17T00:20:33.313974329Z" level=error msg="encountered an error cleaning up failed sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.314182 containerd[1508]: time="2026-01-17T00:20:33.314073289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b598cf86d-t5pf2,Uid:ee43eed9-c394-4ae0-a0e3-7818f2df122b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.314242 kubelet[2574]: E0117 
00:20:33.314200 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.314274 kubelet[2574]: E0117 00:20:33.314250 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" Jan 17 00:20:33.314274 kubelet[2574]: E0117 00:20:33.314268 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" Jan 17 00:20:33.314340 kubelet[2574]: E0117 00:20:33.314309 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b598cf86d-t5pf2_calico-apiserver(ee43eed9-c394-4ae0-a0e3-7818f2df122b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b598cf86d-t5pf2_calico-apiserver(ee43eed9-c394-4ae0-a0e3-7818f2df122b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:20:33.317712 containerd[1508]: time="2026-01-17T00:20:33.317670567Z" level=error msg="Failed to destroy network for sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.318967 containerd[1508]: time="2026-01-17T00:20:33.318942276Z" level=error msg="encountered an error cleaning up failed sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.319012 containerd[1508]: time="2026-01-17T00:20:33.318978066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d8d794ff-xflgs,Uid:e8ec3d55-57ab-493d-b18c-44cba62fcddb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.319310 kubelet[2574]: E0117 00:20:33.319077 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.319310 kubelet[2574]: E0117 00:20:33.319110 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" Jan 17 00:20:33.319310 kubelet[2574]: E0117 00:20:33.319124 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" Jan 17 00:20:33.319384 kubelet[2574]: E0117 00:20:33.319153 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79d8d794ff-xflgs_calico-apiserver(e8ec3d55-57ab-493d-b18c-44cba62fcddb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79d8d794ff-xflgs_calico-apiserver(e8ec3d55-57ab-493d-b18c-44cba62fcddb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:20:33.327631 containerd[1508]: time="2026-01-17T00:20:33.327585672Z" level=error msg="Failed to destroy network for sandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.327892 containerd[1508]: time="2026-01-17T00:20:33.327863702Z" level=error msg="encountered an error cleaning up failed sandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.327916 containerd[1508]: time="2026-01-17T00:20:33.327897341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd767c58f-tmjms,Uid:a4983325-8320-4092-8a15-0de07c45e1dd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.328052 kubelet[2574]: E0117 00:20:33.328022 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.328089 kubelet[2574]: E0117 00:20:33.328070 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dd767c58f-tmjms" Jan 17 00:20:33.328119 kubelet[2574]: E0117 00:20:33.328096 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dd767c58f-tmjms" Jan 17 00:20:33.328207 kubelet[2574]: E0117 00:20:33.328157 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dd767c58f-tmjms_calico-system(a4983325-8320-4092-8a15-0de07c45e1dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dd767c58f-tmjms_calico-system(a4983325-8320-4092-8a15-0de07c45e1dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dd767c58f-tmjms" podUID="a4983325-8320-4092-8a15-0de07c45e1dd" Jan 17 00:20:33.800202 kubelet[2574]: I0117 00:20:33.799868 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:20:33.803926 containerd[1508]: time="2026-01-17T00:20:33.803861750Z" level=info msg="StopPodSandbox for \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\"" Jan 17 00:20:33.804160 containerd[1508]: time="2026-01-17T00:20:33.804104249Z" level=info msg="Ensure that sandbox 109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3 in task-service has been cleanup successfully" Jan 17 00:20:33.805820 kubelet[2574]: I0117 00:20:33.805405 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:20:33.807683 containerd[1508]: time="2026-01-17T00:20:33.807400899Z" level=info msg="StopPodSandbox for \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\"" Jan 17 00:20:33.808038 containerd[1508]: 
time="2026-01-17T00:20:33.807794008Z" level=info msg="Ensure that sandbox ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80 in task-service has been cleanup successfully" Jan 17 00:20:33.818071 kubelet[2574]: I0117 00:20:33.816692 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:33.819186 containerd[1508]: time="2026-01-17T00:20:33.819084852Z" level=info msg="StopPodSandbox for \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\"" Jan 17 00:20:33.819891 containerd[1508]: time="2026-01-17T00:20:33.819858661Z" level=info msg="Ensure that sandbox 90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef in task-service has been cleanup successfully" Jan 17 00:20:33.827741 kubelet[2574]: I0117 00:20:33.827705 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:20:33.830474 containerd[1508]: time="2026-01-17T00:20:33.829389106Z" level=info msg="StopPodSandbox for \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\"" Jan 17 00:20:33.830474 containerd[1508]: time="2026-01-17T00:20:33.829816627Z" level=info msg="Ensure that sandbox 1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04 in task-service has been cleanup successfully" Jan 17 00:20:33.835058 kubelet[2574]: I0117 00:20:33.835011 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:20:33.838544 containerd[1508]: time="2026-01-17T00:20:33.838486071Z" level=info msg="StopPodSandbox for \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\"" Jan 17 00:20:33.842904 containerd[1508]: time="2026-01-17T00:20:33.842419779Z" level=info msg="Ensure that sandbox aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c in task-service has been cleanup successfully" Jan 17 00:20:33.846710 kubelet[2574]: I0117 00:20:33.846657 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:20:33.849036 containerd[1508]: time="2026-01-17T00:20:33.848894636Z" level=info msg="StopPodSandbox for \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\"" Jan 17 00:20:33.849339 containerd[1508]: time="2026-01-17T00:20:33.849301486Z" level=info msg="Ensure that sandbox 62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951 in task-service has been cleanup successfully" Jan 17 00:20:33.857330 kubelet[2574]: I0117 00:20:33.857248 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:20:33.861026 containerd[1508]: time="2026-01-17T00:20:33.860993479Z" level=info msg="StopPodSandbox for \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\"" Jan 17 00:20:33.861319 containerd[1508]: time="2026-01-17T00:20:33.861296980Z" level=info msg="Ensure that sandbox 057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f in task-service has been cleanup successfully" Jan 17 00:20:33.865065 kubelet[2574]: I0117 00:20:33.865039 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:20:33.874898 
containerd[1508]: time="2026-01-17T00:20:33.874851272Z" level=info msg="StopPodSandbox for \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\"" Jan 17 00:20:33.874898 containerd[1508]: time="2026-01-17T00:20:33.875052822Z" level=info msg="Ensure that sandbox ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733 in task-service has been cleanup successfully" Jan 17 00:20:33.884630 kubelet[2574]: I0117 00:20:33.884374 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:20:33.888983 containerd[1508]: time="2026-01-17T00:20:33.888961544Z" level=info msg="StopPodSandbox for \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\"" Jan 17 00:20:33.889363 containerd[1508]: time="2026-01-17T00:20:33.889192885Z" level=info msg="Ensure that sandbox a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171 in task-service has been cleanup successfully" Jan 17 00:20:33.950509 containerd[1508]: time="2026-01-17T00:20:33.950453872Z" level=error msg="StopPodSandbox for \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\" failed" error="failed to destroy network for sandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.952655 kubelet[2574]: E0117 00:20:33.950865 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:20:33.952655 kubelet[2574]: E0117 00:20:33.950921 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3"} Jan 17 00:20:33.952655 kubelet[2574]: E0117 00:20:33.950968 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4983325-8320-4092-8a15-0de07c45e1dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:33.952655 kubelet[2574]: E0117 00:20:33.950990 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4983325-8320-4092-8a15-0de07c45e1dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dd767c58f-tmjms" podUID="a4983325-8320-4092-8a15-0de07c45e1dd" Jan 17 00:20:33.963223 containerd[1508]: time="2026-01-17T00:20:33.963180485Z" level=error 
msg="StopPodSandbox for \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\" failed" error="failed to destroy network for sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.963745 kubelet[2574]: E0117 00:20:33.963712 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:20:33.963796 kubelet[2574]: E0117 00:20:33.963756 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951"} Jan 17 00:20:33.963796 kubelet[2574]: E0117 00:20:33.963780 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe9eb613-1b2e-4b40-8b1b-77be36bfdc32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:33.963867 kubelet[2574]: E0117 00:20:33.963800 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe9eb613-1b2e-4b40-8b1b-77be36bfdc32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tfsvt" podUID="fe9eb613-1b2e-4b40-8b1b-77be36bfdc32" Jan 17 00:20:33.964812 containerd[1508]: time="2026-01-17T00:20:33.964780084Z" level=error msg="StopPodSandbox for \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\" failed" error="failed to destroy network for sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.965035 kubelet[2574]: E0117 00:20:33.965011 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:20:33.965079 kubelet[2574]: E0117 00:20:33.965056 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80"} Jan 17 00:20:33.965079 kubelet[2574]: E0117 00:20:33.965072 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee43eed9-c394-4ae0-a0e3-7818f2df122b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:33.965136 kubelet[2574]: E0117 00:20:33.965087 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee43eed9-c394-4ae0-a0e3-7818f2df122b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:20:33.971132 containerd[1508]: time="2026-01-17T00:20:33.971105811Z" level=error msg="StopPodSandbox for \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\" failed" error="failed to destroy network for sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.971256 kubelet[2574]: E0117 00:20:33.971226 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:20:33.971256 kubelet[2574]: E0117 00:20:33.971252 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733"} Jan 17 00:20:33.971354 kubelet[2574]: E0117 00:20:33.971270 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"669c9dd2-93ed-4be5-8b4c-834706d32358\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:33.971354 kubelet[2574]: E0117 00:20:33.971283 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"669c9dd2-93ed-4be5-8b4c-834706d32358\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:20:33.985022 containerd[1508]: time="2026-01-17T00:20:33.984977003Z" level=error msg="StopPodSandbox for \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\" failed" error="failed to destroy network for sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.985466 kubelet[2574]: E0117 00:20:33.985418 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:20:33.985530 kubelet[2574]: E0117 00:20:33.985466 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04"} Jan 17 00:20:33.985530 kubelet[2574]: E0117 00:20:33.985490 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3748345-d737-4edc-b312-ed0fa45e5e25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:33.985530 kubelet[2574]: E0117 00:20:33.985509 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3748345-d737-4edc-b312-ed0fa45e5e25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:20:33.986489 containerd[1508]: time="2026-01-17T00:20:33.986472083Z" level=error msg="StopPodSandbox for \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\" failed" error="failed to destroy network for sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.986716 kubelet[2574]: E0117 00:20:33.986690 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:20:33.986781 kubelet[2574]: E0117 00:20:33.986763 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c"} Jan 17 00:20:33.987405 kubelet[2574]: E0117 00:20:33.987388 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2288281e-fdb7-48d8-b727-f0cc9e2d198b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:33.987477 kubelet[2574]: E0117 00:20:33.987463 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2288281e-fdb7-48d8-b727-f0cc9e2d198b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hv54k" podUID="2288281e-fdb7-48d8-b727-f0cc9e2d198b" Jan 17 00:20:33.988131 containerd[1508]: time="2026-01-17T00:20:33.988106312Z" level=error msg="StopPodSandbox for \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\" failed" error="failed to destroy network for sandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.988238 kubelet[2574]: E0117 00:20:33.988214 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:33.988238 kubelet[2574]: E0117 00:20:33.988237 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef"} Jan 17 00:20:33.988328 kubelet[2574]: E0117 00:20:33.988252 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b9ac0b2-c7c5-4408-8470-3fecd940db64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:33.988328 kubelet[2574]: E0117 00:20:33.988266 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"7b9ac0b2-c7c5-4408-8470-3fecd940db64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:20:33.990181 containerd[1508]: time="2026-01-17T00:20:33.990158001Z" level=error msg="StopPodSandbox for \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\" failed" error="failed to destroy network for sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.990320 kubelet[2574]: E0117 00:20:33.990300 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:20:33.990341 kubelet[2574]: E0117 00:20:33.990322 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f"} Jan 17 00:20:33.990341 kubelet[2574]: E0117 00:20:33.990337 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10c4610a-ed07-4e29-932b-b9ab7749e6ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:33.990392 kubelet[2574]: E0117 00:20:33.990350 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10c4610a-ed07-4e29-932b-b9ab7749e6ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:20:33.991454 containerd[1508]: time="2026-01-17T00:20:33.991428281Z" level=error msg="StopPodSandbox for \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\" failed" error="failed to destroy network for sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:33.991571 kubelet[2574]: E0117 00:20:33.991542 2574 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:20:33.991571 kubelet[2574]: E0117 00:20:33.991569 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171"} Jan 17 00:20:33.991632 kubelet[2574]: E0117 00:20:33.991584 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8ec3d55-57ab-493d-b18c-44cba62fcddb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:33.991632 kubelet[2574]: E0117 00:20:33.991616 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8ec3d55-57ab-493d-b18c-44cba62fcddb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:20:40.783760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2697932913.mount: Deactivated successfully. 
Jan 17 00:20:40.840387 containerd[1508]: time="2026-01-17T00:20:40.840330002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:40.843347 containerd[1508]: time="2026-01-17T00:20:40.843319693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:20:40.845963 containerd[1508]: time="2026-01-17T00:20:40.845258203Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:40.847471 containerd[1508]: time="2026-01-17T00:20:40.847454313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:40.848307 containerd[1508]: time="2026-01-17T00:20:40.848288453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.047803838s" Jan 17 00:20:40.848585 containerd[1508]: time="2026-01-17T00:20:40.848559733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:20:40.877221 containerd[1508]: time="2026-01-17T00:20:40.877164136Z" level=info msg="CreateContainer within sandbox \"7609f6aa9ddfbbd958937ffc0a522d40541655d8b8d1f49f1e1ffa00c1c3272c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:20:40.892875 containerd[1508]: time="2026-01-17T00:20:40.892835359Z" level=info msg="CreateContainer within sandbox \"7609f6aa9ddfbbd958937ffc0a522d40541655d8b8d1f49f1e1ffa00c1c3272c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"24f980bce066142fcb7d9732a5dbb51db5574cccc96ee5e1bd138154ccbada0f\"" Jan 17 00:20:40.894184 containerd[1508]: time="2026-01-17T00:20:40.893991258Z" level=info msg="StartContainer for \"24f980bce066142fcb7d9732a5dbb51db5574cccc96ee5e1bd138154ccbada0f\"" Jan 17 00:20:40.937828 systemd[1]: Started cri-containerd-24f980bce066142fcb7d9732a5dbb51db5574cccc96ee5e1bd138154ccbada0f.scope - libcontainer container 24f980bce066142fcb7d9732a5dbb51db5574cccc96ee5e1bd138154ccbada0f. Jan 17 00:20:40.977974 containerd[1508]: time="2026-01-17T00:20:40.977945218Z" level=info msg="StartContainer for \"24f980bce066142fcb7d9732a5dbb51db5574cccc96ee5e1bd138154ccbada0f\" returns successfully" Jan 17 00:20:41.057348 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:20:41.057444 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
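The pull that unblocks everything completes here: ghcr.io/flatcar/calico/node:v3.30.4 arrives after 8.047803838s with 156883675 bytes read, roughly 19.5 MB/s if one assumes the containerd byte counter approximates the transferred size (the manifest reports a close but distinct size, 156883537). The WireGuard module loading moments after the calico-node container starts is consistent with calico/node probing for WireGuard support, which it uses for its optional node-to-node encryption; that probe is the likely trigger for the module load. A quick throughput check:

package main

import "fmt"

func main() {
	const bytesRead = 156883675.0 // "active requests=0, bytes read=156883675"
	const seconds = 8.047803838   // "in 8.047803838s" from the Pulled line
	fmt.Printf("~%.1f MB/s\n", bytesRead/seconds/1e6) // prints ~19.5 MB/s
}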
Jan 17 00:20:41.189367 containerd[1508]: time="2026-01-17T00:20:41.189305996Z" level=info msg="StopPodSandbox for \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\"" Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.266 [INFO][3880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.267 [INFO][3880] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" iface="eth0" netns="/var/run/netns/cni-3b23705a-3373-ed21-7646-0a401fa97d1d" Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.267 [INFO][3880] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" iface="eth0" netns="/var/run/netns/cni-3b23705a-3373-ed21-7646-0a401fa97d1d" Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.269 [INFO][3880] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" iface="eth0" netns="/var/run/netns/cni-3b23705a-3373-ed21-7646-0a401fa97d1d" Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.269 [INFO][3880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.269 [INFO][3880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.299 [INFO][3888] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" HandleID="k8s-pod-network.109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--5dd767c58f--tmjms-eth0" Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.300 [INFO][3888] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.300 [INFO][3888] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.312 [WARNING][3888] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" HandleID="k8s-pod-network.109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--5dd767c58f--tmjms-eth0" Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.312 [INFO][3888] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" HandleID="k8s-pod-network.109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--5dd767c58f--tmjms-eth0" Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.314 [INFO][3888] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:41.320680 containerd[1508]: 2026-01-17 00:20:41.317 [INFO][3880] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:20:41.321929 containerd[1508]: time="2026-01-17T00:20:41.320773112Z" level=info msg="TearDown network for sandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\" successfully" Jan 17 00:20:41.321929 containerd[1508]: time="2026-01-17T00:20:41.320799282Z" level=info msg="StopPodSandbox for \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\" returns successfully" Jan 17 00:20:41.377271 kubelet[2574]: I0117 00:20:41.377229 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4983325-8320-4092-8a15-0de07c45e1dd-whisker-ca-bundle\") pod \"a4983325-8320-4092-8a15-0de07c45e1dd\" (UID: \"a4983325-8320-4092-8a15-0de07c45e1dd\") " Jan 17 00:20:41.377271 kubelet[2574]: I0117 00:20:41.377273 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4983325-8320-4092-8a15-0de07c45e1dd-whisker-backend-key-pair\") pod \"a4983325-8320-4092-8a15-0de07c45e1dd\" (UID: \"a4983325-8320-4092-8a15-0de07c45e1dd\") " Jan 17 00:20:41.377771 kubelet[2574]: I0117 00:20:41.377295 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq6wf\" (UniqueName: \"kubernetes.io/projected/a4983325-8320-4092-8a15-0de07c45e1dd-kube-api-access-sq6wf\") pod \"a4983325-8320-4092-8a15-0de07c45e1dd\" (UID: \"a4983325-8320-4092-8a15-0de07c45e1dd\") " Jan 17 00:20:41.380675 kubelet[2574]: I0117 00:20:41.378887 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4983325-8320-4092-8a15-0de07c45e1dd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a4983325-8320-4092-8a15-0de07c45e1dd" (UID: "a4983325-8320-4092-8a15-0de07c45e1dd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:20:41.382618 kubelet[2574]: I0117 00:20:41.382546 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4983325-8320-4092-8a15-0de07c45e1dd-kube-api-access-sq6wf" (OuterVolumeSpecName: "kube-api-access-sq6wf") pod "a4983325-8320-4092-8a15-0de07c45e1dd" (UID: "a4983325-8320-4092-8a15-0de07c45e1dd"). InnerVolumeSpecName "kube-api-access-sq6wf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:20:41.383541 kubelet[2574]: I0117 00:20:41.383517 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4983325-8320-4092-8a15-0de07c45e1dd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a4983325-8320-4092-8a15-0de07c45e1dd" (UID: "a4983325-8320-4092-8a15-0de07c45e1dd"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:20:41.478080 kubelet[2574]: I0117 00:20:41.477877 2574 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4983325-8320-4092-8a15-0de07c45e1dd-whisker-ca-bundle\") on node \"ci-4081-3-6-n-8c81c3eeb1\" DevicePath \"\"" Jan 17 00:20:41.478080 kubelet[2574]: I0117 00:20:41.477926 2574 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4983325-8320-4092-8a15-0de07c45e1dd-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-8c81c3eeb1\" DevicePath \"\"" Jan 17 00:20:41.478080 kubelet[2574]: I0117 00:20:41.477948 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sq6wf\" (UniqueName: \"kubernetes.io/projected/a4983325-8320-4092-8a15-0de07c45e1dd-kube-api-access-sq6wf\") on node \"ci-4081-3-6-n-8c81c3eeb1\" DevicePath \"\"" Jan 17 00:20:41.680238 systemd[1]: Removed slice kubepods-besteffort-poda4983325_8320_4092_8a15_0de07c45e1dd.slice - libcontainer container kubepods-besteffort-poda4983325_8320_4092_8a15_0de07c45e1dd.slice. Jan 17 00:20:41.785435 systemd[1]: run-netns-cni\x2d3b23705a\x2d3373\x2ded21\x2d7646\x2d0a401fa97d1d.mount: Deactivated successfully. Jan 17 00:20:41.785684 systemd[1]: var-lib-kubelet-pods-a4983325\x2d8320\x2d4092\x2d8a15\x2d0de07c45e1dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsq6wf.mount: Deactivated successfully. Jan 17 00:20:41.785824 systemd[1]: var-lib-kubelet-pods-a4983325\x2d8320\x2d4092\x2d8a15\x2d0de07c45e1dd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:20:41.951706 kubelet[2574]: I0117 00:20:41.951542 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zws6q" podStartSLOduration=1.598159115 podStartE2EDuration="19.95149099s" podCreationTimestamp="2026-01-17 00:20:22 +0000 UTC" firstStartedPulling="2026-01-17 00:20:22.496908699 +0000 UTC m=+22.973854948" lastFinishedPulling="2026-01-17 00:20:40.850240584 +0000 UTC m=+41.327186823" observedRunningTime="2026-01-17 00:20:41.947508219 +0000 UTC m=+42.424454498" watchObservedRunningTime="2026-01-17 00:20:41.95149099 +0000 UTC m=+42.428437269" Jan 17 00:20:42.046393 systemd[1]: Created slice kubepods-besteffort-pode0d4c934_d914_4aab_9515_da3ebc2d4bad.slice - libcontainer container kubepods-besteffort-pode0d4c934_d914_4aab_9515_da3ebc2d4bad.slice. 
Jan 17 00:20:42.082866 kubelet[2574]: I0117 00:20:42.082781 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0d4c934-d914-4aab-9515-da3ebc2d4bad-whisker-backend-key-pair\") pod \"whisker-df997d949-g829z\" (UID: \"e0d4c934-d914-4aab-9515-da3ebc2d4bad\") " pod="calico-system/whisker-df997d949-g829z" Jan 17 00:20:42.082866 kubelet[2574]: I0117 00:20:42.082861 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt445\" (UniqueName: \"kubernetes.io/projected/e0d4c934-d914-4aab-9515-da3ebc2d4bad-kube-api-access-vt445\") pod \"whisker-df997d949-g829z\" (UID: \"e0d4c934-d914-4aab-9515-da3ebc2d4bad\") " pod="calico-system/whisker-df997d949-g829z" Jan 17 00:20:42.083084 kubelet[2574]: I0117 00:20:42.082893 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0d4c934-d914-4aab-9515-da3ebc2d4bad-whisker-ca-bundle\") pod \"whisker-df997d949-g829z\" (UID: \"e0d4c934-d914-4aab-9515-da3ebc2d4bad\") " pod="calico-system/whisker-df997d949-g829z" Jan 17 00:20:42.353820 containerd[1508]: time="2026-01-17T00:20:42.353304339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df997d949-g829z,Uid:e0d4c934-d914-4aab-9515-da3ebc2d4bad,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:42.548544 systemd-networkd[1408]: cali2e75714ce8d: Link UP Jan 17 00:20:42.550038 systemd-networkd[1408]: cali2e75714ce8d: Gained carrier Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.415 [INFO][3909] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.433 [INFO][3909] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0 whisker-df997d949- calico-system e0d4c934-d914-4aab-9515-da3ebc2d4bad 913 0 2026-01-17 00:20:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:df997d949 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-8c81c3eeb1 whisker-df997d949-g829z eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2e75714ce8d [] [] }} ContainerID="9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" Namespace="calico-system" Pod="whisker-df997d949-g829z" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.434 [INFO][3909] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" Namespace="calico-system" Pod="whisker-df997d949-g829z" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.495 [INFO][3931] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" HandleID="k8s-pod-network.9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.496 [INFO][3931] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" HandleID="k8s-pod-network.9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103920), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8c81c3eeb1", "pod":"whisker-df997d949-g829z", "timestamp":"2026-01-17 00:20:42.495540885 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8c81c3eeb1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.496 [INFO][3931] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.498 [INFO][3931] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.498 [INFO][3931] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8c81c3eeb1' Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.508 [INFO][3931] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.514 [INFO][3931] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.518 [INFO][3931] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.521 [INFO][3931] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.523 [INFO][3931] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.524 [INFO][3931] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.525 [INFO][3931] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696 Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.530 [INFO][3931] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.533 [INFO][3931] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.129/26] block=192.168.115.128/26 handle="k8s-pod-network.9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.533 [INFO][3931] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.129/26] handle="k8s-pod-network.9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 
00:20:42.533 [INFO][3931] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:42.568765 containerd[1508]: 2026-01-17 00:20:42.533 [INFO][3931] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.129/26] IPv6=[] ContainerID="9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" HandleID="k8s-pod-network.9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0" Jan 17 00:20:42.569859 containerd[1508]: 2026-01-17 00:20:42.535 [INFO][3909] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" Namespace="calico-system" Pod="whisker-df997d949-g829z" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0", GenerateName:"whisker-df997d949-", Namespace:"calico-system", SelfLink:"", UID:"e0d4c934-d914-4aab-9515-da3ebc2d4bad", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"df997d949", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"", Pod:"whisker-df997d949-g829z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e75714ce8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:42.569859 containerd[1508]: 2026-01-17 00:20:42.536 [INFO][3909] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.129/32] ContainerID="9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" Namespace="calico-system" Pod="whisker-df997d949-g829z" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0" Jan 17 00:20:42.569859 containerd[1508]: 2026-01-17 00:20:42.536 [INFO][3909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e75714ce8d ContainerID="9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" Namespace="calico-system" Pod="whisker-df997d949-g829z" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0" Jan 17 00:20:42.569859 containerd[1508]: 2026-01-17 00:20:42.551 [INFO][3909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" Namespace="calico-system" Pod="whisker-df997d949-g829z" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0" Jan 17 00:20:42.569859 containerd[1508]: 2026-01-17 00:20:42.551 [INFO][3909] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" Namespace="calico-system" Pod="whisker-df997d949-g829z" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0", GenerateName:"whisker-df997d949-", Namespace:"calico-system", SelfLink:"", UID:"e0d4c934-d914-4aab-9515-da3ebc2d4bad", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"df997d949", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696", Pod:"whisker-df997d949-g829z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e75714ce8d", MAC:"82:e5:b1:9b:d7:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:42.569859 containerd[1508]: 2026-01-17 00:20:42.560 [INFO][3909] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696" Namespace="calico-system" Pod="whisker-df997d949-g829z" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--df997d949--g829z-eth0" Jan 17 00:20:42.592324 containerd[1508]: time="2026-01-17T00:20:42.592112190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:42.592523 containerd[1508]: time="2026-01-17T00:20:42.592187180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:42.592523 containerd[1508]: time="2026-01-17T00:20:42.592201840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:42.592523 containerd[1508]: time="2026-01-17T00:20:42.592291619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:42.621739 systemd[1]: Started cri-containerd-9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696.scope - libcontainer container 9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696. 
Jan 17 00:20:42.691648 containerd[1508]: time="2026-01-17T00:20:42.691167585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df997d949-g829z,Uid:e0d4c934-d914-4aab-9515-da3ebc2d4bad,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ef3037c7ccd8919ded861da8ed92f4683d047edbf5ceea4580e4f36c204b696\"" Jan 17 00:20:42.693858 containerd[1508]: time="2026-01-17T00:20:42.693610636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:20:43.116367 containerd[1508]: time="2026-01-17T00:20:43.115942620Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:43.118932 containerd[1508]: time="2026-01-17T00:20:43.118823821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:20:43.119647 containerd[1508]: time="2026-01-17T00:20:43.118973030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:20:43.119776 kubelet[2574]: E0117 00:20:43.119170 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:20:43.119776 kubelet[2574]: E0117 00:20:43.119233 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:20:43.124477 kubelet[2574]: E0117 00:20:43.124384 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12f2f0817a3b40168af76823b3573c15,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vt445,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df997d949-g829z_calico-system(e0d4c934-d914-4aab-9515-da3ebc2d4bad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:43.127894 containerd[1508]: time="2026-01-17T00:20:43.127799923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:20:43.669200 kubelet[2574]: I0117 00:20:43.669148 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4983325-8320-4092-8a15-0de07c45e1dd" path="/var/lib/kubelet/pods/a4983325-8320-4092-8a15-0de07c45e1dd/volumes" Jan 17 00:20:43.862829 containerd[1508]: time="2026-01-17T00:20:43.862134856Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:43.867299 containerd[1508]: time="2026-01-17T00:20:43.865417837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:20:43.867299 containerd[1508]: time="2026-01-17T00:20:43.865444307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:20:43.867474 kubelet[2574]: E0117 00:20:43.866143 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:20:43.867474 kubelet[2574]: E0117 00:20:43.866200 2574 kuberuntime_image.go:42] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:20:43.867690 kubelet[2574]: E0117 00:20:43.866344 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vt445,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df997d949-g829z_calico-system(e0d4c934-d914-4aab-9515-da3ebc2d4bad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:43.867852 kubelet[2574]: E0117 00:20:43.867794 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:20:43.934245 kubelet[2574]: E0117 
00:20:43.934111 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:20:44.193989 systemd-networkd[1408]: cali2e75714ce8d: Gained IPv6LL Jan 17 00:20:44.667056 containerd[1508]: time="2026-01-17T00:20:44.666989570Z" level=info msg="StopPodSandbox for \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\"" Jan 17 00:20:44.667969 containerd[1508]: time="2026-01-17T00:20:44.667905440Z" level=info msg="StopPodSandbox for \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\"" Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.756 [INFO][4157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.757 [INFO][4157] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" iface="eth0" netns="/var/run/netns/cni-0d27cd8b-a5c1-a153-9980-a36b63190ce9" Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.759 [INFO][4157] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" iface="eth0" netns="/var/run/netns/cni-0d27cd8b-a5c1-a153-9980-a36b63190ce9" Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.760 [INFO][4157] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" iface="eth0" netns="/var/run/netns/cni-0d27cd8b-a5c1-a153-9980-a36b63190ce9" Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.760 [INFO][4157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.760 [INFO][4157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.801 [INFO][4171] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" HandleID="k8s-pod-network.aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.802 [INFO][4171] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.802 [INFO][4171] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.810 [WARNING][4171] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" HandleID="k8s-pod-network.aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.810 [INFO][4171] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" HandleID="k8s-pod-network.aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.812 [INFO][4171] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:44.818213 containerd[1508]: 2026-01-17 00:20:44.815 [INFO][4157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:20:44.822794 containerd[1508]: time="2026-01-17T00:20:44.820775137Z" level=info msg="TearDown network for sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\" successfully" Jan 17 00:20:44.822794 containerd[1508]: time="2026-01-17T00:20:44.820838258Z" level=info msg="StopPodSandbox for \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\" returns successfully" Jan 17 00:20:44.823933 containerd[1508]: time="2026-01-17T00:20:44.823762209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hv54k,Uid:2288281e-fdb7-48d8-b727-f0cc9e2d198b,Namespace:kube-system,Attempt:1,}" Jan 17 00:20:44.827397 systemd[1]: run-netns-cni\x2d0d27cd8b\x2da5c1\x2da153\x2d9980\x2da36b63190ce9.mount: Deactivated successfully. Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.760 [INFO][4158] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.761 [INFO][4158] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" iface="eth0" netns="/var/run/netns/cni-3f1a9f95-fd3b-c35c-b779-969efde18f8a" Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.762 [INFO][4158] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" iface="eth0" netns="/var/run/netns/cni-3f1a9f95-fd3b-c35c-b779-969efde18f8a" Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.763 [INFO][4158] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" iface="eth0" netns="/var/run/netns/cni-3f1a9f95-fd3b-c35c-b779-969efde18f8a" Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.763 [INFO][4158] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.764 [INFO][4158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.808 [INFO][4173] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" HandleID="k8s-pod-network.a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.808 [INFO][4173] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.812 [INFO][4173] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.830 [WARNING][4173] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" HandleID="k8s-pod-network.a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.831 [INFO][4173] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" HandleID="k8s-pod-network.a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.835 [INFO][4173] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:44.847178 containerd[1508]: 2026-01-17 00:20:44.841 [INFO][4158] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:20:44.851068 containerd[1508]: time="2026-01-17T00:20:44.849065369Z" level=info msg="TearDown network for sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\" successfully" Jan 17 00:20:44.851068 containerd[1508]: time="2026-01-17T00:20:44.849111509Z" level=info msg="StopPodSandbox for \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\" returns successfully" Jan 17 00:20:44.851388 containerd[1508]: time="2026-01-17T00:20:44.851311080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d8d794ff-xflgs,Uid:e8ec3d55-57ab-493d-b18c-44cba62fcddb,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:20:44.858014 systemd[1]: run-netns-cni\x2d3f1a9f95\x2dfd3b\x2dc35c\x2db779\x2d969efde18f8a.mount: Deactivated successfully. 
Jan 17 00:20:45.007746 systemd-networkd[1408]: cali60b8e7abe7f: Link UP Jan 17 00:20:45.008722 systemd-networkd[1408]: cali60b8e7abe7f: Gained carrier Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.897 [INFO][4185] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.909 [INFO][4185] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0 coredns-674b8bbfcf- kube-system 2288281e-fdb7-48d8-b727-f0cc9e2d198b 938 0 2026-01-17 00:20:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-8c81c3eeb1 coredns-674b8bbfcf-hv54k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali60b8e7abe7f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" Namespace="kube-system" Pod="coredns-674b8bbfcf-hv54k" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.909 [INFO][4185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" Namespace="kube-system" Pod="coredns-674b8bbfcf-hv54k" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.956 [INFO][4210] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" HandleID="k8s-pod-network.8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.957 [INFO][4210] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" HandleID="k8s-pod-network.8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-8c81c3eeb1", "pod":"coredns-674b8bbfcf-hv54k", "timestamp":"2026-01-17 00:20:44.956878289 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8c81c3eeb1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.958 [INFO][4210] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.958 [INFO][4210] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.958 [INFO][4210] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8c81c3eeb1' Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.968 [INFO][4210] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.973 [INFO][4210] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.979 [INFO][4210] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.981 [INFO][4210] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.983 [INFO][4210] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.983 [INFO][4210] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.985 [INFO][4210] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219 Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.989 [INFO][4210] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.994 [INFO][4210] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.130/26] block=192.168.115.128/26 handle="k8s-pod-network.8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.995 [INFO][4210] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.130/26] handle="k8s-pod-network.8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.995 [INFO][4210] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:20:45.027479 containerd[1508]: 2026-01-17 00:20:44.995 [INFO][4210] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.130/26] IPv6=[] ContainerID="8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" HandleID="k8s-pod-network.8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:20:45.029094 containerd[1508]: 2026-01-17 00:20:45.001 [INFO][4185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" Namespace="kube-system" Pod="coredns-674b8bbfcf-hv54k" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2288281e-fdb7-48d8-b727-f0cc9e2d198b", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"", Pod:"coredns-674b8bbfcf-hv54k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60b8e7abe7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:45.029094 containerd[1508]: 2026-01-17 00:20:45.001 [INFO][4185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.130/32] ContainerID="8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" Namespace="kube-system" Pod="coredns-674b8bbfcf-hv54k" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:20:45.029094 containerd[1508]: 2026-01-17 00:20:45.001 [INFO][4185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60b8e7abe7f ContainerID="8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" Namespace="kube-system" Pod="coredns-674b8bbfcf-hv54k" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:20:45.029094 containerd[1508]: 2026-01-17 00:20:45.012 [INFO][4185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-hv54k" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:20:45.029094 containerd[1508]: 2026-01-17 00:20:45.012 [INFO][4185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" Namespace="kube-system" Pod="coredns-674b8bbfcf-hv54k" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2288281e-fdb7-48d8-b727-f0cc9e2d198b", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219", Pod:"coredns-674b8bbfcf-hv54k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60b8e7abe7f", MAC:"8a:3e:27:1a:db:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:45.029094 containerd[1508]: 2026-01-17 00:20:45.024 [INFO][4185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219" Namespace="kube-system" Pod="coredns-674b8bbfcf-hv54k" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:20:45.046520 containerd[1508]: time="2026-01-17T00:20:45.046401875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:45.046630 containerd[1508]: time="2026-01-17T00:20:45.046532855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:45.046630 containerd[1508]: time="2026-01-17T00:20:45.046552625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:45.047065 containerd[1508]: time="2026-01-17T00:20:45.046682115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:45.080739 systemd[1]: Started cri-containerd-8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219.scope - libcontainer container 8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219. Jan 17 00:20:45.129214 systemd-networkd[1408]: cali2dda8058fdb: Link UP Jan 17 00:20:45.131457 systemd-networkd[1408]: cali2dda8058fdb: Gained carrier Jan 17 00:20:45.136232 containerd[1508]: time="2026-01-17T00:20:45.136153094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hv54k,Uid:2288281e-fdb7-48d8-b727-f0cc9e2d198b,Namespace:kube-system,Attempt:1,} returns sandbox id \"8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219\"" Jan 17 00:20:45.148548 containerd[1508]: time="2026-01-17T00:20:45.146821158Z" level=info msg="CreateContainer within sandbox \"8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:44.913 [INFO][4195] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:44.927 [INFO][4195] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0 calico-apiserver-79d8d794ff- calico-apiserver e8ec3d55-57ab-493d-b18c-44cba62fcddb 939 0 2026-01-17 00:20:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79d8d794ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-8c81c3eeb1 calico-apiserver-79d8d794ff-xflgs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2dda8058fdb [] [] }} ContainerID="6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" Namespace="calico-apiserver" Pod="calico-apiserver-79d8d794ff-xflgs" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:44.927 [INFO][4195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" Namespace="calico-apiserver" Pod="calico-apiserver-79d8d794ff-xflgs" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:44.982 [INFO][4221] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" HandleID="k8s-pod-network.6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:44.982 [INFO][4221] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" HandleID="k8s-pod-network.6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003233b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-8c81c3eeb1", 
"pod":"calico-apiserver-79d8d794ff-xflgs", "timestamp":"2026-01-17 00:20:44.982195828 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8c81c3eeb1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:44.982 [INFO][4221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:44.995 [INFO][4221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:44.995 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8c81c3eeb1' Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.075 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.084 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.091 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.094 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.099 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.100 [INFO][4221] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.103 [INFO][4221] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67 Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.107 [INFO][4221] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.113 [INFO][4221] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.131/26] block=192.168.115.128/26 handle="k8s-pod-network.6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.114 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.131/26] handle="k8s-pod-network.6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.114 [INFO][4221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:20:45.158723 containerd[1508]: 2026-01-17 00:20:45.114 [INFO][4221] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.131/26] IPv6=[] ContainerID="6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" HandleID="k8s-pod-network.6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:20:45.159443 containerd[1508]: 2026-01-17 00:20:45.124 [INFO][4195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" Namespace="calico-apiserver" Pod="calico-apiserver-79d8d794ff-xflgs" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0", GenerateName:"calico-apiserver-79d8d794ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8ec3d55-57ab-493d-b18c-44cba62fcddb", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d8d794ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"", Pod:"calico-apiserver-79d8d794ff-xflgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2dda8058fdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:45.159443 containerd[1508]: 2026-01-17 00:20:45.124 [INFO][4195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.131/32] ContainerID="6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" Namespace="calico-apiserver" Pod="calico-apiserver-79d8d794ff-xflgs" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:20:45.159443 containerd[1508]: 2026-01-17 00:20:45.124 [INFO][4195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2dda8058fdb ContainerID="6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" Namespace="calico-apiserver" Pod="calico-apiserver-79d8d794ff-xflgs" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:20:45.159443 containerd[1508]: 2026-01-17 00:20:45.134 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" Namespace="calico-apiserver" Pod="calico-apiserver-79d8d794ff-xflgs" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:20:45.159443 containerd[1508]: 2026-01-17 
00:20:45.137 [INFO][4195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" Namespace="calico-apiserver" Pod="calico-apiserver-79d8d794ff-xflgs" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0", GenerateName:"calico-apiserver-79d8d794ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8ec3d55-57ab-493d-b18c-44cba62fcddb", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d8d794ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67", Pod:"calico-apiserver-79d8d794ff-xflgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2dda8058fdb", MAC:"96:f7:47:d0:c8:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:45.159443 containerd[1508]: 2026-01-17 00:20:45.153 [INFO][4195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67" Namespace="calico-apiserver" Pod="calico-apiserver-79d8d794ff-xflgs" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:20:45.167799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount15158432.mount: Deactivated successfully. Jan 17 00:20:45.170158 containerd[1508]: time="2026-01-17T00:20:45.170129178Z" level=info msg="CreateContainer within sandbox \"8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20d7b4dc47d1d4ba8379339ba5f201b218e9865ac817e14ea1d89ef74dd258f9\"" Jan 17 00:20:45.170976 containerd[1508]: time="2026-01-17T00:20:45.170953008Z" level=info msg="StartContainer for \"20d7b4dc47d1d4ba8379339ba5f201b218e9865ac817e14ea1d89ef74dd258f9\"" Jan 17 00:20:45.193138 containerd[1508]: time="2026-01-17T00:20:45.193019388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:45.193988 containerd[1508]: time="2026-01-17T00:20:45.193801079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:45.194690 containerd[1508]: time="2026-01-17T00:20:45.194508659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:45.197334 containerd[1508]: time="2026-01-17T00:20:45.197304300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:45.210059 systemd[1]: Started cri-containerd-20d7b4dc47d1d4ba8379339ba5f201b218e9865ac817e14ea1d89ef74dd258f9.scope - libcontainer container 20d7b4dc47d1d4ba8379339ba5f201b218e9865ac817e14ea1d89ef74dd258f9. Jan 17 00:20:45.214800 systemd[1]: Started cri-containerd-6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67.scope - libcontainer container 6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67. Jan 17 00:20:45.240461 containerd[1508]: time="2026-01-17T00:20:45.240133429Z" level=info msg="StartContainer for \"20d7b4dc47d1d4ba8379339ba5f201b218e9865ac817e14ea1d89ef74dd258f9\" returns successfully" Jan 17 00:20:45.261551 containerd[1508]: time="2026-01-17T00:20:45.260807597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d8d794ff-xflgs,Uid:e8ec3d55-57ab-493d-b18c-44cba62fcddb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67\"" Jan 17 00:20:45.264379 containerd[1508]: time="2026-01-17T00:20:45.264178269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:20:45.667501 containerd[1508]: time="2026-01-17T00:20:45.667087442Z" level=info msg="StopPodSandbox for \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\"" Jan 17 00:20:45.688773 containerd[1508]: time="2026-01-17T00:20:45.688506981Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:45.690274 containerd[1508]: time="2026-01-17T00:20:45.690100332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:20:45.690274 containerd[1508]: time="2026-01-17T00:20:45.690196402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:20:45.690650 kubelet[2574]: E0117 00:20:45.690549 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:45.691218 kubelet[2574]: E0117 00:20:45.690661 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:45.691218 kubelet[2574]: E0117 00:20:45.690845 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79d8d794ff-xflgs_calico-apiserver(e8ec3d55-57ab-493d-b18c-44cba62fcddb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:45.692073 kubelet[2574]: E0117 00:20:45.691993 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.757 [INFO][4379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.758 [INFO][4379] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" iface="eth0" netns="/var/run/netns/cni-256b2e01-46de-227a-2cfa-45ad6f3fa6d0" Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.760 [INFO][4379] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" iface="eth0" netns="/var/run/netns/cni-256b2e01-46de-227a-2cfa-45ad6f3fa6d0" Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.761 [INFO][4379] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" iface="eth0" netns="/var/run/netns/cni-256b2e01-46de-227a-2cfa-45ad6f3fa6d0" Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.761 [INFO][4379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.761 [INFO][4379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.807 [INFO][4386] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" HandleID="k8s-pod-network.ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.808 [INFO][4386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.808 [INFO][4386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.818 [WARNING][4386] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" HandleID="k8s-pod-network.ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.818 [INFO][4386] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" HandleID="k8s-pod-network.ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.821 [INFO][4386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:45.830532 containerd[1508]: 2026-01-17 00:20:45.825 [INFO][4379] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:20:45.831444 containerd[1508]: time="2026-01-17T00:20:45.830801153Z" level=info msg="TearDown network for sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\" successfully" Jan 17 00:20:45.831444 containerd[1508]: time="2026-01-17T00:20:45.830837823Z" level=info msg="StopPodSandbox for \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\" returns successfully" Jan 17 00:20:45.831930 containerd[1508]: time="2026-01-17T00:20:45.831843843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2d8j7,Uid:669c9dd2-93ed-4be5-8b4c-834706d32358,Namespace:calico-system,Attempt:1,}" Jan 17 00:20:45.959113 kubelet[2574]: E0117 00:20:45.958058 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:20:45.965889 systemd[1]: run-netns-cni\x2d256b2e01\x2d46de\x2d227a\x2d2cfa\x2d45ad6f3fa6d0.mount: Deactivated successfully. Jan 17 00:20:46.021948 kubelet[2574]: I0117 00:20:46.021406 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hv54k" podStartSLOduration=39.021394285 podStartE2EDuration="39.021394285s" podCreationTimestamp="2026-01-17 00:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:45.991943281 +0000 UTC m=+46.468889550" watchObservedRunningTime="2026-01-17 00:20:46.021394285 +0000 UTC m=+46.498340524" Jan 17 00:20:46.060345 systemd-networkd[1408]: cali6260c0bf8dc: Link UP Jan 17 00:20:46.060493 systemd-networkd[1408]: cali6260c0bf8dc: Gained carrier Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:45.904 [INFO][4393] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:45.921 [INFO][4393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0 csi-node-driver- calico-system 669c9dd2-93ed-4be5-8b4c-834706d32358 957 0 2026-01-17 00:20:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-8c81c3eeb1 csi-node-driver-2d8j7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6260c0bf8dc [] [] }} ContainerID="c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" Namespace="calico-system" Pod="csi-node-driver-2d8j7" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:45.922 [INFO][4393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" Namespace="calico-system" Pod="csi-node-driver-2d8j7" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:45.995 [INFO][4404] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" HandleID="k8s-pod-network.c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:45.995 [INFO][4404] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" HandleID="k8s-pod-network.c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003039a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8c81c3eeb1", "pod":"csi-node-driver-2d8j7", "timestamp":"2026-01-17 00:20:45.995689583 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8c81c3eeb1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:45.995 [INFO][4404] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:45.996 [INFO][4404] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:45.996 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8c81c3eeb1' Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.010 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.016 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.025 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.034 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.038 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.039 [INFO][4404] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.043 [INFO][4404] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579 Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.048 [INFO][4404] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 
handle="k8s-pod-network.c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.055 [INFO][4404] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.132/26] block=192.168.115.128/26 handle="k8s-pod-network.c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.055 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.132/26] handle="k8s-pod-network.c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.055 [INFO][4404] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:46.074140 containerd[1508]: 2026-01-17 00:20:46.055 [INFO][4404] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.132/26] IPv6=[] ContainerID="c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" HandleID="k8s-pod-network.c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:20:46.075002 containerd[1508]: 2026-01-17 00:20:46.057 [INFO][4393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" Namespace="calico-system" Pod="csi-node-driver-2d8j7" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"669c9dd2-93ed-4be5-8b4c-834706d32358", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"", Pod:"csi-node-driver-2d8j7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6260c0bf8dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:46.075002 containerd[1508]: 2026-01-17 00:20:46.057 [INFO][4393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.132/32] ContainerID="c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" Namespace="calico-system" Pod="csi-node-driver-2d8j7" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:20:46.075002 containerd[1508]: 2026-01-17 00:20:46.057 [INFO][4393] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6260c0bf8dc ContainerID="c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" Namespace="calico-system" Pod="csi-node-driver-2d8j7" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:20:46.075002 containerd[1508]: 2026-01-17 00:20:46.060 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" Namespace="calico-system" Pod="csi-node-driver-2d8j7" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:20:46.075002 containerd[1508]: 2026-01-17 00:20:46.060 [INFO][4393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" Namespace="calico-system" Pod="csi-node-driver-2d8j7" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"669c9dd2-93ed-4be5-8b4c-834706d32358", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579", Pod:"csi-node-driver-2d8j7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6260c0bf8dc", MAC:"ae:bb:86:48:c0:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:46.075002 containerd[1508]: 2026-01-17 00:20:46.070 [INFO][4393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579" Namespace="calico-system" Pod="csi-node-driver-2d8j7" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:20:46.095257 containerd[1508]: time="2026-01-17T00:20:46.094901381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:46.095257 containerd[1508]: time="2026-01-17T00:20:46.094957181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:46.095257 containerd[1508]: time="2026-01-17T00:20:46.094968821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:46.095257 containerd[1508]: time="2026-01-17T00:20:46.095069631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:46.118967 systemd[1]: Started cri-containerd-c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579.scope - libcontainer container c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579. Jan 17 00:20:46.135377 kubelet[2574]: I0117 00:20:46.135312 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:20:46.153966 containerd[1508]: time="2026-01-17T00:20:46.153904709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2d8j7,Uid:669c9dd2-93ed-4be5-8b4c-834706d32358,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579\"" Jan 17 00:20:46.156945 containerd[1508]: time="2026-01-17T00:20:46.156899840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:20:46.591075 containerd[1508]: time="2026-01-17T00:20:46.591013289Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:46.593247 containerd[1508]: time="2026-01-17T00:20:46.593184620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:20:46.593454 containerd[1508]: time="2026-01-17T00:20:46.593302590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:20:46.595834 kubelet[2574]: E0117 00:20:46.595764 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:20:46.596116 kubelet[2574]: E0117 00:20:46.595838 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:20:46.597975 kubelet[2574]: E0117 00:20:46.597772 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkfrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2d8j7_calico-system(669c9dd2-93ed-4be5-8b4c-834706d32358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:46.603727 containerd[1508]: time="2026-01-17T00:20:46.602377854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:20:46.669690 containerd[1508]: time="2026-01-17T00:20:46.669182327Z" level=info msg="StopPodSandbox for \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\"" Jan 17 00:20:46.670767 containerd[1508]: time="2026-01-17T00:20:46.669209337Z" level=info msg="StopPodSandbox for \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\"" Jan 17 00:20:46.756879 systemd-networkd[1408]: cali60b8e7abe7f: Gained IPv6LL Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.793 [INFO][4520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.793 [INFO][4520] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" iface="eth0" netns="/var/run/netns/cni-7a267673-e6c3-67c6-664b-0c402000014f" Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.795 [INFO][4520] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" iface="eth0" netns="/var/run/netns/cni-7a267673-e6c3-67c6-664b-0c402000014f" Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.796 [INFO][4520] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" iface="eth0" netns="/var/run/netns/cni-7a267673-e6c3-67c6-664b-0c402000014f" Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.796 [INFO][4520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.796 [INFO][4520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.829 [INFO][4538] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" HandleID="k8s-pod-network.62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.829 [INFO][4538] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.829 [INFO][4538] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.837 [WARNING][4538] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" HandleID="k8s-pod-network.62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.837 [INFO][4538] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" HandleID="k8s-pod-network.62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.839 [INFO][4538] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:46.844848 containerd[1508]: 2026-01-17 00:20:46.841 [INFO][4520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:20:46.847695 containerd[1508]: time="2026-01-17T00:20:46.847652823Z" level=info msg="TearDown network for sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\" successfully" Jan 17 00:20:46.847834 containerd[1508]: time="2026-01-17T00:20:46.847814192Z" level=info msg="StopPodSandbox for \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\" returns successfully" Jan 17 00:20:46.849648 containerd[1508]: time="2026-01-17T00:20:46.848722293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tfsvt,Uid:fe9eb613-1b2e-4b40-8b1b-77be36bfdc32,Namespace:kube-system,Attempt:1,}" Jan 17 00:20:46.854444 systemd[1]: run-netns-cni\x2d7a267673\x2de6c3\x2d67c6\x2d664b\x2d0c402000014f.mount: Deactivated successfully. 
Jan 17 00:20:46.881981 systemd-networkd[1408]: cali2dda8058fdb: Gained IPv6LL Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.774 [INFO][4519] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.774 [INFO][4519] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" iface="eth0" netns="/var/run/netns/cni-6228a1ad-8509-cbab-86ec-7fdf836ed80b" Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.776 [INFO][4519] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" iface="eth0" netns="/var/run/netns/cni-6228a1ad-8509-cbab-86ec-7fdf836ed80b" Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.777 [INFO][4519] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" iface="eth0" netns="/var/run/netns/cni-6228a1ad-8509-cbab-86ec-7fdf836ed80b" Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.777 [INFO][4519] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.778 [INFO][4519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.860 [INFO][4533] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" HandleID="k8s-pod-network.ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.860 [INFO][4533] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.860 [INFO][4533] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.883 [WARNING][4533] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" HandleID="k8s-pod-network.ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.883 [INFO][4533] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" HandleID="k8s-pod-network.ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.886 [INFO][4533] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:46.892770 containerd[1508]: 2026-01-17 00:20:46.889 [INFO][4519] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:20:46.893434 containerd[1508]: time="2026-01-17T00:20:46.893089255Z" level=info msg="TearDown network for sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\" successfully" Jan 17 00:20:46.893434 containerd[1508]: time="2026-01-17T00:20:46.893111295Z" level=info msg="StopPodSandbox for \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\" returns successfully" Jan 17 00:20:46.894336 containerd[1508]: time="2026-01-17T00:20:46.894321345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b598cf86d-t5pf2,Uid:ee43eed9-c394-4ae0-a0e3-7818f2df122b,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:20:46.953029 systemd[1]: run-netns-cni\x2d6228a1ad\x2d8509\x2dcbab\x2d86ec\x2d7fdf836ed80b.mount: Deactivated successfully. Jan 17 00:20:46.966637 kubelet[2574]: E0117 00:20:46.966108 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:20:47.040857 containerd[1508]: time="2026-01-17T00:20:47.040823857Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:47.042772 containerd[1508]: time="2026-01-17T00:20:47.042747408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:20:47.043562 kubelet[2574]: E0117 00:20:47.043508 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:20:47.043661 kubelet[2574]: E0117 00:20:47.043567 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:20:47.044112 kubelet[2574]: E0117 00:20:47.043726 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkfrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2d8j7_calico-system(669c9dd2-93ed-4be5-8b4c-834706d32358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:47.044241 containerd[1508]: time="2026-01-17T00:20:47.042837308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:20:47.045526 kubelet[2574]: E0117 00:20:47.045389 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:20:47.063293 systemd-networkd[1408]: cali283289cd406: Link UP Jan 17 00:20:47.063612 systemd-networkd[1408]: cali283289cd406: Gained carrier Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:46.914 [INFO][4547] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:46.938 
[INFO][4547] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0 coredns-674b8bbfcf- kube-system fe9eb613-1b2e-4b40-8b1b-77be36bfdc32 988 0 2026-01-17 00:20:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-8c81c3eeb1 coredns-674b8bbfcf-tfsvt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali283289cd406 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfsvt" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:46.938 [INFO][4547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfsvt" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:46.992 [INFO][4572] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" HandleID="k8s-pod-network.a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:46.992 [INFO][4572] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" HandleID="k8s-pod-network.a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5290), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-8c81c3eeb1", "pod":"coredns-674b8bbfcf-tfsvt", "timestamp":"2026-01-17 00:20:46.992317562 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8c81c3eeb1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:46.992 [INFO][4572] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:46.993 [INFO][4572] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:46.993 [INFO][4572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8c81c3eeb1' Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.001 [INFO][4572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.036 [INFO][4572] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.040 [INFO][4572] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.042 [INFO][4572] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.044 [INFO][4572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.045 [INFO][4572] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.047 [INFO][4572] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.051 [INFO][4572] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.054 [INFO][4572] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.133/26] block=192.168.115.128/26 handle="k8s-pod-network.a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.054 [INFO][4572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.133/26] handle="k8s-pod-network.a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.055 [INFO][4572] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
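
The ipam/ipam.go messages above trace one complete Calico IPAM claim for the coredns pod: acquire the host-wide lock, look up the host's block affinities, load the affine block 192.168.115.128/26, assign one address from it, write the block back to claim the IP, release the lock. The Go model below is invented purely for illustration (Calico's real implementation is in the ipam/ipam.go the log references); under the assumption that a /26 block is tracked as a 64-slot bitmap, it reproduces the sequence those messages describe, including arriving at 192.168.115.133 once the earlier assignments visible in this log are accounted for.

    // ipamsketch.go - an illustrative model, not Calico's code: take the
    // host-wide lock, walk the affine /26 block for a free slot, mark it used
    // ("Writing block in order to claim IPs"), and release the lock.
    package main

    import (
    	"fmt"
    	"net/netip"
    	"sync"
    )

    type block struct {
    	cidr netip.Prefix // 192.168.115.128/26, as in the log
    	used [64]bool     // a /26 holds exactly 64 addresses
    }

    var hostWideLock sync.Mutex // stands in for the host-wide IPAM lock

    // assign claims the first free address in the block, if any.
    func (b *block) assign() (netip.Addr, bool) {
    	addr := b.cidr.Addr()
    	for i := range b.used {
    		if !b.used[i] {
    			b.used[i] = true
    			return addr, true
    		}
    		addr = addr.Next()
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	hostWideLock.Lock() // "About to acquire host-wide IPAM lock."
    	defer hostWideLock.Unlock()

    	b := &block{cidr: netip.MustParsePrefix("192.168.115.128/26")}
    	for i := 0; i < 5; i++ { // pretend .128-.132 were claimed earlier
    		b.used[i] = true
    	}
    	if ip, ok := b.assign(); ok {
    		fmt.Println("claimed", ip) // 192.168.115.133, matching the log
    	}
    } // deferred unlock: "Released host-wide IPAM lock."

The single host-wide lock explains why the two concurrent sandbox setups in this log (coredns-674b8bbfcf-tfsvt and calico-apiserver-7b598cf86d-t5pf2) serialize their claims and receive consecutive addresses.
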
Jan 17 00:20:47.073942 containerd[1508]: 2026-01-17 00:20:47.055 [INFO][4572] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.133/26] IPv6=[] ContainerID="a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" HandleID="k8s-pod-network.a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:20:47.075000 containerd[1508]: 2026-01-17 00:20:47.057 [INFO][4547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfsvt" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe9eb613-1b2e-4b40-8b1b-77be36bfdc32", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"", Pod:"coredns-674b8bbfcf-tfsvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283289cd406", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:47.075000 containerd[1508]: 2026-01-17 00:20:47.057 [INFO][4547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.133/32] ContainerID="a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfsvt" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:20:47.075000 containerd[1508]: 2026-01-17 00:20:47.057 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali283289cd406 ContainerID="a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfsvt" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:20:47.075000 containerd[1508]: 2026-01-17 00:20:47.063 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-tfsvt" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:20:47.075000 containerd[1508]: 2026-01-17 00:20:47.064 [INFO][4547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfsvt" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe9eb613-1b2e-4b40-8b1b-77be36bfdc32", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf", Pod:"coredns-674b8bbfcf-tfsvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283289cd406", MAC:"2a:b2:bd:6b:b9:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:47.075000 containerd[1508]: 2026-01-17 00:20:47.070 [INFO][4547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-tfsvt" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:20:47.092165 containerd[1508]: time="2026-01-17T00:20:47.092086184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:47.092395 containerd[1508]: time="2026-01-17T00:20:47.092334915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:47.092643 containerd[1508]: time="2026-01-17T00:20:47.092504674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:47.093305 containerd[1508]: time="2026-01-17T00:20:47.093284175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:47.119709 systemd[1]: Started cri-containerd-a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf.scope - libcontainer container a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf. Jan 17 00:20:47.168715 systemd-networkd[1408]: calie5e80a0a252: Link UP Jan 17 00:20:47.168918 systemd-networkd[1408]: calie5e80a0a252: Gained carrier Jan 17 00:20:47.177235 containerd[1508]: time="2026-01-17T00:20:47.176957399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tfsvt,Uid:fe9eb613-1b2e-4b40-8b1b-77be36bfdc32,Namespace:kube-system,Attempt:1,} returns sandbox id \"a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf\"" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:46.937 [INFO][4562] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:46.953 [INFO][4562] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0 calico-apiserver-7b598cf86d- calico-apiserver ee43eed9-c394-4ae0-a0e3-7818f2df122b 987 0 2026-01-17 00:20:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b598cf86d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-8c81c3eeb1 calico-apiserver-7b598cf86d-t5pf2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie5e80a0a252 [] [] }} ContainerID="850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-t5pf2" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:46.958 [INFO][4562] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-t5pf2" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.001 [INFO][4579] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" HandleID="k8s-pod-network.850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.002 [INFO][4579] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" HandleID="k8s-pod-network.850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-8c81c3eeb1", "pod":"calico-apiserver-7b598cf86d-t5pf2", "timestamp":"2026-01-17 00:20:47.001638016 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8c81c3eeb1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.002 [INFO][4579] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.054 [INFO][4579] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.054 [INFO][4579] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8c81c3eeb1' Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.106 [INFO][4579] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.136 [INFO][4579] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.142 [INFO][4579] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.144 [INFO][4579] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.146 [INFO][4579] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.146 [INFO][4579] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.147 [INFO][4579] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5 Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.151 [INFO][4579] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.160 [INFO][4579] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.134/26] block=192.168.115.128/26 handle="k8s-pod-network.850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.161 [INFO][4579] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.134/26] handle="k8s-pod-network.850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.161 [INFO][4579] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
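
The second claim above lands on 192.168.115.134, again from the host's affine block. As a quick sanity check, the tiny self-contained Go snippet below (illustrative only, using the standard library's net/netip) confirms that the claimed address really falls inside 192.168.115.128/26, whose range is .128 through .191.

    // affinity.go - verify the log's claimed address sits in the affine block.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	block := netip.MustParsePrefix("192.168.115.128/26")
    	ip := netip.MustParseAddr("192.168.115.134")
    	fmt.Println(block.Contains(ip)) // true: .128-.191 belong to this /26
    }
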
Jan 17 00:20:47.180721 containerd[1508]: 2026-01-17 00:20:47.161 [INFO][4579] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.134/26] IPv6=[] ContainerID="850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" HandleID="k8s-pod-network.850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:20:47.181134 containerd[1508]: 2026-01-17 00:20:47.164 [INFO][4562] cni-plugin/k8s.go 418: Populated endpoint ContainerID="850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-t5pf2" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0", GenerateName:"calico-apiserver-7b598cf86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee43eed9-c394-4ae0-a0e3-7818f2df122b", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b598cf86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"", Pod:"calico-apiserver-7b598cf86d-t5pf2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e80a0a252", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:47.181134 containerd[1508]: 2026-01-17 00:20:47.165 [INFO][4562] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.134/32] ContainerID="850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-t5pf2" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:20:47.181134 containerd[1508]: 2026-01-17 00:20:47.165 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5e80a0a252 ContainerID="850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-t5pf2" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:20:47.181134 containerd[1508]: 2026-01-17 00:20:47.167 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-t5pf2" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:20:47.181134 containerd[1508]: 2026-01-17 
00:20:47.167 [INFO][4562] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-t5pf2" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0", GenerateName:"calico-apiserver-7b598cf86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee43eed9-c394-4ae0-a0e3-7818f2df122b", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b598cf86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5", Pod:"calico-apiserver-7b598cf86d-t5pf2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e80a0a252", MAC:"76:81:f6:80:44:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:47.181134 containerd[1508]: 2026-01-17 00:20:47.177 [INFO][4562] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-t5pf2" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:20:47.188153 containerd[1508]: time="2026-01-17T00:20:47.187813735Z" level=info msg="CreateContainer within sandbox \"a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:20:47.203689 kernel: bpftool[4671]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:20:47.220322 containerd[1508]: time="2026-01-17T00:20:47.218188311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:47.220322 containerd[1508]: time="2026-01-17T00:20:47.220045672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:47.220322 containerd[1508]: time="2026-01-17T00:20:47.220056702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:47.220322 containerd[1508]: time="2026-01-17T00:20:47.220264862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:47.222893 containerd[1508]: time="2026-01-17T00:20:47.222810264Z" level=info msg="CreateContainer within sandbox \"a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22f63cf24c4dbcbf0d4146c0b777020c6e2f76221395ef68149c161a45d88b40\"" Jan 17 00:20:47.224777 containerd[1508]: time="2026-01-17T00:20:47.224729004Z" level=info msg="StartContainer for \"22f63cf24c4dbcbf0d4146c0b777020c6e2f76221395ef68149c161a45d88b40\"" Jan 17 00:20:47.245725 systemd[1]: Started cri-containerd-850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5.scope - libcontainer container 850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5. Jan 17 00:20:47.251100 systemd[1]: Started cri-containerd-22f63cf24c4dbcbf0d4146c0b777020c6e2f76221395ef68149c161a45d88b40.scope - libcontainer container 22f63cf24c4dbcbf0d4146c0b777020c6e2f76221395ef68149c161a45d88b40. Jan 17 00:20:47.280119 containerd[1508]: time="2026-01-17T00:20:47.280071873Z" level=info msg="StartContainer for \"22f63cf24c4dbcbf0d4146c0b777020c6e2f76221395ef68149c161a45d88b40\" returns successfully" Jan 17 00:20:47.335920 containerd[1508]: time="2026-01-17T00:20:47.335115433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b598cf86d-t5pf2,Uid:ee43eed9-c394-4ae0-a0e3-7818f2df122b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5\"" Jan 17 00:20:47.336459 containerd[1508]: time="2026-01-17T00:20:47.336417584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:20:47.478887 systemd-networkd[1408]: vxlan.calico: Link UP Jan 17 00:20:47.478894 systemd-networkd[1408]: vxlan.calico: Gained carrier Jan 17 00:20:47.770737 containerd[1508]: time="2026-01-17T00:20:47.759581578Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:47.770737 containerd[1508]: time="2026-01-17T00:20:47.761441568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:20:47.770737 containerd[1508]: time="2026-01-17T00:20:47.761493868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:20:47.770946 kubelet[2574]: E0117 00:20:47.761668 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:47.770946 kubelet[2574]: E0117 00:20:47.761733 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:47.770946 kubelet[2574]: E0117 00:20:47.762938 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmmfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b598cf86d-t5pf2_calico-apiserver(ee43eed9-c394-4ae0-a0e3-7818f2df122b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:47.770946 kubelet[2574]: E0117 00:20:47.764068 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:20:47.982198 kubelet[2574]: E0117 00:20:47.981083 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" 
podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:20:47.982198 kubelet[2574]: E0117 00:20:47.981592 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:20:48.025790 kubelet[2574]: I0117 00:20:48.025500 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tfsvt" podStartSLOduration=41.025480129 podStartE2EDuration="41.025480129s" podCreationTimestamp="2026-01-17 00:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:47.999840554 +0000 UTC m=+48.476786833" watchObservedRunningTime="2026-01-17 00:20:48.025480129 +0000 UTC m=+48.502426408" Jan 17 00:20:48.034363 systemd-networkd[1408]: cali6260c0bf8dc: Gained IPv6LL Jan 17 00:20:48.667316 containerd[1508]: time="2026-01-17T00:20:48.666691668Z" level=info msg="StopPodSandbox for \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\"" Jan 17 00:20:48.668060 containerd[1508]: time="2026-01-17T00:20:48.667649108Z" level=info msg="StopPodSandbox for \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\"" Jan 17 00:20:48.674816 systemd-networkd[1408]: calie5e80a0a252: Gained IPv6LL Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.772 [INFO][4833] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.772 [INFO][4833] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" iface="eth0" netns="/var/run/netns/cni-98f5ffbc-c388-3fd9-8ffa-44ca763f948b" Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.773 [INFO][4833] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" iface="eth0" netns="/var/run/netns/cni-98f5ffbc-c388-3fd9-8ffa-44ca763f948b" Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.773 [INFO][4833] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" iface="eth0" netns="/var/run/netns/cni-98f5ffbc-c388-3fd9-8ffa-44ca763f948b" Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.773 [INFO][4833] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.773 [INFO][4833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.832 [INFO][4852] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" HandleID="k8s-pod-network.90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.832 [INFO][4852] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.833 [INFO][4852] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.844 [WARNING][4852] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" HandleID="k8s-pod-network.90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.844 [INFO][4852] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" HandleID="k8s-pod-network.90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.846 [INFO][4852] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:48.849712 containerd[1508]: 2026-01-17 00:20:48.848 [INFO][4833] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:48.852298 containerd[1508]: time="2026-01-17T00:20:48.851721404Z" level=info msg="TearDown network for sandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\" successfully" Jan 17 00:20:48.852298 containerd[1508]: time="2026-01-17T00:20:48.851743954Z" level=info msg="StopPodSandbox for \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\" returns successfully" Jan 17 00:20:48.852949 containerd[1508]: time="2026-01-17T00:20:48.852910154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7779db755c-krrrf,Uid:7b9ac0b2-c7c5-4408-8470-3fecd940db64,Namespace:calico-system,Attempt:1,}" Jan 17 00:20:48.854104 systemd[1]: run-netns-cni\x2d98f5ffbc\x2dc388\x2d3fd9\x2d8ffa\x2d44ca763f948b.mount: Deactivated successfully. 
Jan 17 00:20:48.866300 systemd-networkd[1408]: cali283289cd406: Gained IPv6LL Jan 17 00:20:48.866683 systemd-networkd[1408]: vxlan.calico: Gained IPv6LL Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.769 [INFO][4837] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.769 [INFO][4837] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" iface="eth0" netns="/var/run/netns/cni-6be2f35b-f80a-b005-66d6-5ac300c73112" Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.771 [INFO][4837] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" iface="eth0" netns="/var/run/netns/cni-6be2f35b-f80a-b005-66d6-5ac300c73112" Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.772 [INFO][4837] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" iface="eth0" netns="/var/run/netns/cni-6be2f35b-f80a-b005-66d6-5ac300c73112" Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.772 [INFO][4837] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.772 [INFO][4837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.838 [INFO][4850] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" HandleID="k8s-pod-network.1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.838 [INFO][4850] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.846 [INFO][4850] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.859 [WARNING][4850] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" HandleID="k8s-pod-network.1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.859 [INFO][4850] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" HandleID="k8s-pod-network.1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.861 [INFO][4850] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:48.871576 containerd[1508]: 2026-01-17 00:20:48.863 [INFO][4837] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:20:48.872785 containerd[1508]: time="2026-01-17T00:20:48.871666585Z" level=info msg="TearDown network for sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\" successfully" Jan 17 00:20:48.872785 containerd[1508]: time="2026-01-17T00:20:48.871973725Z" level=info msg="StopPodSandbox for \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\" returns successfully" Jan 17 00:20:48.877657 containerd[1508]: time="2026-01-17T00:20:48.877283358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fw7xc,Uid:d3748345-d737-4edc-b312-ed0fa45e5e25,Namespace:calico-system,Attempt:1,}" Jan 17 00:20:48.879151 systemd[1]: run-netns-cni\x2d6be2f35b\x2df80a\x2db005\x2d66d6\x2d5ac300c73112.mount: Deactivated successfully. Jan 17 00:20:48.978954 kubelet[2574]: E0117 00:20:48.978856 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:20:49.004876 systemd-networkd[1408]: calif3855ca4e98: Link UP Jan 17 00:20:49.006200 systemd-networkd[1408]: calif3855ca4e98: Gained carrier Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.919 [INFO][4864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0 calico-kube-controllers-7779db755c- calico-system 7b9ac0b2-c7c5-4408-8470-3fecd940db64 1032 0 2026-01-17 00:20:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7779db755c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-8c81c3eeb1 calico-kube-controllers-7779db755c-krrrf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif3855ca4e98 [] [] }} ContainerID="a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" Namespace="calico-system" Pod="calico-kube-controllers-7779db755c-krrrf" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.919 [INFO][4864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" Namespace="calico-system" Pod="calico-kube-controllers-7779db755c-krrrf" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.944 [INFO][4889] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" HandleID="k8s-pod-network.a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" 
Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.944 [INFO][4889] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" HandleID="k8s-pod-network.a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8c81c3eeb1", "pod":"calico-kube-controllers-7779db755c-krrrf", "timestamp":"2026-01-17 00:20:48.944140106 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8c81c3eeb1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.944 [INFO][4889] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.944 [INFO][4889] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.944 [INFO][4889] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8c81c3eeb1' Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.950 [INFO][4889] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.954 [INFO][4889] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.957 [INFO][4889] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.959 [INFO][4889] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.962 [INFO][4889] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.962 [INFO][4889] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.967 [INFO][4889] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.978 [INFO][4889] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.991 [INFO][4889] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.135/26] block=192.168.115.128/26 handle="k8s-pod-network.a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.991 [INFO][4889] ipam/ipam.go 
878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.135/26] handle="k8s-pod-network.a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.991 [INFO][4889] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:49.024783 containerd[1508]: 2026-01-17 00:20:48.991 [INFO][4889] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.135/26] IPv6=[] ContainerID="a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" HandleID="k8s-pod-network.a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:49.025201 containerd[1508]: 2026-01-17 00:20:48.994 [INFO][4864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" Namespace="calico-system" Pod="calico-kube-controllers-7779db755c-krrrf" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0", GenerateName:"calico-kube-controllers-7779db755c-", Namespace:"calico-system", SelfLink:"", UID:"7b9ac0b2-c7c5-4408-8470-3fecd940db64", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7779db755c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"", Pod:"calico-kube-controllers-7779db755c-krrrf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3855ca4e98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:49.025201 containerd[1508]: 2026-01-17 00:20:48.994 [INFO][4864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.135/32] ContainerID="a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" Namespace="calico-system" Pod="calico-kube-controllers-7779db755c-krrrf" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:49.025201 containerd[1508]: 2026-01-17 00:20:48.994 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3855ca4e98 ContainerID="a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" Namespace="calico-system" Pod="calico-kube-controllers-7779db755c-krrrf" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:49.025201 containerd[1508]: 2026-01-17 
00:20:49.005 [INFO][4864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" Namespace="calico-system" Pod="calico-kube-controllers-7779db755c-krrrf" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:49.025201 containerd[1508]: 2026-01-17 00:20:49.006 [INFO][4864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" Namespace="calico-system" Pod="calico-kube-controllers-7779db755c-krrrf" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0", GenerateName:"calico-kube-controllers-7779db755c-", Namespace:"calico-system", SelfLink:"", UID:"7b9ac0b2-c7c5-4408-8470-3fecd940db64", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7779db755c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a", Pod:"calico-kube-controllers-7779db755c-krrrf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3855ca4e98", MAC:"56:75:ae:26:dd:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:49.025201 containerd[1508]: 2026-01-17 00:20:49.021 [INFO][4864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a" Namespace="calico-system" Pod="calico-kube-controllers-7779db755c-krrrf" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:49.061977 containerd[1508]: time="2026-01-17T00:20:49.060411566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:49.061977 containerd[1508]: time="2026-01-17T00:20:49.060450556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:49.061977 containerd[1508]: time="2026-01-17T00:20:49.060458526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:49.061977 containerd[1508]: time="2026-01-17T00:20:49.060516206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:49.093757 systemd[1]: Started cri-containerd-a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a.scope - libcontainer container a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a. Jan 17 00:20:49.102243 systemd-networkd[1408]: cali97675bebccd: Link UP Jan 17 00:20:49.104455 systemd-networkd[1408]: cali97675bebccd: Gained carrier Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:48.924 [INFO][4874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0 goldmane-666569f655- calico-system d3748345-d737-4edc-b312-ed0fa45e5e25 1031 0 2026-01-17 00:20:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-8c81c3eeb1 goldmane-666569f655-fw7xc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali97675bebccd [] [] }} ContainerID="14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" Namespace="calico-system" Pod="goldmane-666569f655-fw7xc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:48.924 [INFO][4874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" Namespace="calico-system" Pod="goldmane-666569f655-fw7xc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:48.944 [INFO][4894] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" HandleID="k8s-pod-network.14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:48.944 [INFO][4894] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" HandleID="k8s-pod-network.14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f240), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8c81c3eeb1", "pod":"goldmane-666569f655-fw7xc", "timestamp":"2026-01-17 00:20:48.944309997 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8c81c3eeb1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:48.944 [INFO][4894] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:48.991 [INFO][4894] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:48.991 [INFO][4894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8c81c3eeb1' Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.051 [INFO][4894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.058 [INFO][4894] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.068 [INFO][4894] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.070 [INFO][4894] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.073 [INFO][4894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.073 [INFO][4894] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.076 [INFO][4894] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14 Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.081 [INFO][4894] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.089 [INFO][4894] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.136/26] block=192.168.115.128/26 handle="k8s-pod-network.14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.089 [INFO][4894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.136/26] handle="k8s-pod-network.14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.089 [INFO][4894] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
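The [INFO][4894] sequence just above is one complete assignment transaction: acquire the host-wide IPAM lock, look up this node's block affinities, load the affine block 192.168.115.128/26, claim the next free address (.136 for goldmane), write the block back, release the lock. A conceptual sketch of that loop under stated assumptions (a single in-process mutex and a flat map instead of Calico's datastore-backed blocks):

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // addr -> handleID
}

var ipamLock sync.Mutex // the "host-wide IPAM lock" from the log

// assign claims the lowest free address in the block for the given handle.
func (b *block) assign(handle string) (netip.Addr, bool) {
	ipamLock.Lock()
	defer ipamLock.Unlock()
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle // "Writing block in order to claim IPs"
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; real IPAM would try another block
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.115.128/26"),
		used: map[netip.Addr]string{},
	}
	// Pre-claim .128 through .135 to mimic the block state before goldmane's request.
	for a := b.cidr.Addr(); a.Compare(netip.MustParseAddr("192.168.115.136")) < 0; a = a.Next() {
		b.used[a] = "earlier-workload"
	}
	ip, _ := b.assign("k8s-pod-network.14ae1ed6f5321589...")
	fmt.Println(ip) // 192.168.115.136, matching the log
}
```

Serializing every assignment behind one per-host lock is why the concurrent requests in this section (kube-controllers, goldmane, then apiserver-jkqzc) claim .135, .136 and .137 from the same /26 without colliding.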
Jan 17 00:20:49.125644 containerd[1508]: 2026-01-17 00:20:49.089 [INFO][4894] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.136/26] IPv6=[] ContainerID="14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" HandleID="k8s-pod-network.14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:20:49.126911 containerd[1508]: 2026-01-17 00:20:49.093 [INFO][4874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" Namespace="calico-system" Pod="goldmane-666569f655-fw7xc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d3748345-d737-4edc-b312-ed0fa45e5e25", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"", Pod:"goldmane-666569f655-fw7xc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97675bebccd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:49.126911 containerd[1508]: 2026-01-17 00:20:49.093 [INFO][4874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.136/32] ContainerID="14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" Namespace="calico-system" Pod="goldmane-666569f655-fw7xc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:20:49.126911 containerd[1508]: 2026-01-17 00:20:49.093 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97675bebccd ContainerID="14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" Namespace="calico-system" Pod="goldmane-666569f655-fw7xc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:20:49.126911 containerd[1508]: 2026-01-17 00:20:49.105 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" Namespace="calico-system" Pod="goldmane-666569f655-fw7xc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:20:49.126911 containerd[1508]: 2026-01-17 00:20:49.106 [INFO][4874] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" 
Namespace="calico-system" Pod="goldmane-666569f655-fw7xc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d3748345-d737-4edc-b312-ed0fa45e5e25", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14", Pod:"goldmane-666569f655-fw7xc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97675bebccd", MAC:"2e:4a:66:e4:de:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:49.126911 containerd[1508]: 2026-01-17 00:20:49.115 [INFO][4874] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14" Namespace="calico-system" Pod="goldmane-666569f655-fw7xc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:20:49.145673 containerd[1508]: time="2026-01-17T00:20:49.145173748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:49.145673 containerd[1508]: time="2026-01-17T00:20:49.145356198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:49.145673 containerd[1508]: time="2026-01-17T00:20:49.145404288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:49.146021 containerd[1508]: time="2026-01-17T00:20:49.145539218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:49.197780 systemd[1]: Started cri-containerd-14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14.scope - libcontainer container 14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14. 
Jan 17 00:20:49.200038 containerd[1508]: time="2026-01-17T00:20:49.199811012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7779db755c-krrrf,Uid:7b9ac0b2-c7c5-4408-8470-3fecd940db64,Namespace:calico-system,Attempt:1,} returns sandbox id \"a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a\"" Jan 17 00:20:49.201377 containerd[1508]: time="2026-01-17T00:20:49.201337993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:20:49.240027 containerd[1508]: time="2026-01-17T00:20:49.238175925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fw7xc,Uid:d3748345-d737-4edc-b312-ed0fa45e5e25,Namespace:calico-system,Attempt:1,} returns sandbox id \"14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14\"" Jan 17 00:20:49.632945 containerd[1508]: time="2026-01-17T00:20:49.632741448Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:49.634773 containerd[1508]: time="2026-01-17T00:20:49.634689269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:20:49.634890 containerd[1508]: time="2026-01-17T00:20:49.634799310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:20:49.635134 kubelet[2574]: E0117 00:20:49.635005 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:20:49.635134 kubelet[2574]: E0117 00:20:49.635075 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:20:49.638119 kubelet[2574]: E0117 00:20:49.635348 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2qj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7779db755c-krrrf_calico-system(7b9ac0b2-c7c5-4408-8470-3fecd940db64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:49.638289 containerd[1508]: time="2026-01-17T00:20:49.635707290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:20:49.639668 kubelet[2574]: E0117 00:20:49.639262 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" 
podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:20:49.669910 containerd[1508]: time="2026-01-17T00:20:49.669835101Z" level=info msg="StopPodSandbox for \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\"" Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.751 [INFO][5014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.751 [INFO][5014] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" iface="eth0" netns="/var/run/netns/cni-53ec8935-961e-831a-394f-de16b189f6d5" Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.751 [INFO][5014] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" iface="eth0" netns="/var/run/netns/cni-53ec8935-961e-831a-394f-de16b189f6d5" Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.756 [INFO][5014] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" iface="eth0" netns="/var/run/netns/cni-53ec8935-961e-831a-394f-de16b189f6d5" Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.756 [INFO][5014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.756 [INFO][5014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.799 [INFO][5022] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" HandleID="k8s-pod-network.057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.800 [INFO][5022] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.800 [INFO][5022] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.810 [WARNING][5022] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" HandleID="k8s-pod-network.057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.810 [INFO][5022] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" HandleID="k8s-pod-network.057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.813 [INFO][5022] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:49.821574 containerd[1508]: 2026-01-17 00:20:49.816 [INFO][5014] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:20:49.821574 containerd[1508]: time="2026-01-17T00:20:49.820202414Z" level=info msg="TearDown network for sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\" successfully" Jan 17 00:20:49.821574 containerd[1508]: time="2026-01-17T00:20:49.820246874Z" level=info msg="StopPodSandbox for \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\" returns successfully" Jan 17 00:20:49.823249 containerd[1508]: time="2026-01-17T00:20:49.823173686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b598cf86d-jkqzc,Uid:10c4610a-ed07-4e29-932b-b9ab7749e6ed,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:20:49.827461 systemd[1]: run-netns-cni\x2d53ec8935\x2d961e\x2d831a\x2d394f\x2dde16b189f6d5.mount: Deactivated successfully. Jan 17 00:20:49.987287 kubelet[2574]: E0117 00:20:49.986918 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:20:50.049941 systemd-networkd[1408]: cali6d2cdf5265d: Link UP Jan 17 00:20:50.052385 systemd-networkd[1408]: cali6d2cdf5265d: Gained carrier Jan 17 00:20:50.067486 containerd[1508]: time="2026-01-17T00:20:50.067382819Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:50.069788 containerd[1508]: time="2026-01-17T00:20:50.069695280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:20:50.069853 containerd[1508]: time="2026-01-17T00:20:50.069820890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:20:50.070964 kubelet[2574]: E0117 00:20:50.070914 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:20:50.071033 kubelet[2574]: E0117 00:20:50.070971 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:20:50.072175 kubelet[2574]: E0117 00:20:50.071170 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ck7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fw7xc_calico-system(d3748345-d737-4edc-b312-ed0fa45e5e25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:50.073285 kubelet[2574]: E0117 00:20:50.073162 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 
00:20:49.908 [INFO][5028] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0 calico-apiserver-7b598cf86d- calico-apiserver 10c4610a-ed07-4e29-932b-b9ab7749e6ed 1049 0 2026-01-17 00:20:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b598cf86d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-8c81c3eeb1 calico-apiserver-7b598cf86d-jkqzc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6d2cdf5265d [] [] }} ContainerID="410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-jkqzc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.908 [INFO][5028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-jkqzc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.951 [INFO][5044] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" HandleID="k8s-pod-network.410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.952 [INFO][5044] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" HandleID="k8s-pod-network.410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-8c81c3eeb1", "pod":"calico-apiserver-7b598cf86d-jkqzc", "timestamp":"2026-01-17 00:20:49.951896285 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8c81c3eeb1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.954 [INFO][5044] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.955 [INFO][5044] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.955 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8c81c3eeb1' Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.967 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.975 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.983 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.987 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.995 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:49.995 [INFO][5044] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:50.003 [INFO][5044] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:50.013 [INFO][5044] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:50.026 [INFO][5044] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.137/26] block=192.168.115.128/26 handle="k8s-pod-network.410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:50.026 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.137/26] handle="k8s-pod-network.410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" host="ci-4081-3-6-n-8c81c3eeb1" Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:50.026 [INFO][5044] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
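
The ipam records above walk Calico's block-based allocation end to end: take the host-wide IPAM lock, confirm this host's affinity to the 192.168.115.128/26 block, load the block, claim the first free address under a new handle, write the block back, and release the lock; the summary record that follows then reports the claimed address. A minimal Go sketch of that loop, illustrative only (a toy in-memory block stands in for Calico's datastore, and the function names are hypothetical):

    // Illustrative only: a toy version of the block-based assignment the
    // ipam/ipam.go records above trace. Calico keeps blocks in its
    // datastore and retries on write conflicts; this sketch does neither.
    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    var hostWideLock sync.Mutex // "About to acquire host-wide IPAM lock."

    type block struct {
        cidr      *net.IPNet
        allocated map[string]string // IP -> handle that claimed it
    }

    // autoAssign claims one IPv4 address from the host's affine block, as in
    // "Auto-assign 1 ipv4, 0 ipv6 addrs" ... "Successfully claimed IPs".
    func autoAssign(b *block, handle string) (net.IP, error) {
        hostWideLock.Lock()         // "Acquired host-wide IPAM lock."
        defer hostWideLock.Unlock() // "Released host-wide IPAM lock."

        base := b.cidr.IP.To4()
        ones, bits := b.cidr.Mask.Size()
        for i := 0; i < 1<<(bits-ones); i++ { // a /26 holds 64 addresses
            ip := net.IPv4(base[0], base[1], base[2], base[3]+byte(i))
            if _, taken := b.allocated[ip.String()]; !taken {
                b.allocated[ip.String()] = handle // "Writing block in order to claim IPs"
                return ip, nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.115.128/26")
        b := &block{cidr: cidr, allocated: map[string]string{}}
        ip, err := autoAssign(b, "k8s-pod-network.410588896b9690...")
        if err != nil {
            panic(err)
        }
        fmt.Println("assigned", ip) // first free address in the block
    }

The log's transaction lands on 192.168.115.137 rather than .128 only because earlier pods on this node already hold the lower addresses in the block.
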
Jan 17 00:20:50.083269 containerd[1508]: 2026-01-17 00:20:50.026 [INFO][5044] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.137/26] IPv6=[] ContainerID="410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" HandleID="k8s-pod-network.410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:20:50.084424 containerd[1508]: 2026-01-17 00:20:50.032 [INFO][5028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-jkqzc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0", GenerateName:"calico-apiserver-7b598cf86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"10c4610a-ed07-4e29-932b-b9ab7749e6ed", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b598cf86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"", Pod:"calico-apiserver-7b598cf86d-jkqzc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d2cdf5265d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:50.084424 containerd[1508]: 2026-01-17 00:20:50.032 [INFO][5028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.137/32] ContainerID="410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-jkqzc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:20:50.084424 containerd[1508]: 2026-01-17 00:20:50.033 [INFO][5028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d2cdf5265d ContainerID="410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-jkqzc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:20:50.084424 containerd[1508]: 2026-01-17 00:20:50.051 [INFO][5028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-jkqzc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:20:50.084424 containerd[1508]: 2026-01-17 
00:20:50.060 [INFO][5028] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-jkqzc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0", GenerateName:"calico-apiserver-7b598cf86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"10c4610a-ed07-4e29-932b-b9ab7749e6ed", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b598cf86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f", Pod:"calico-apiserver-7b598cf86d-jkqzc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d2cdf5265d", MAC:"56:54:88:a6:54:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:50.084424 containerd[1508]: 2026-01-17 00:20:50.073 [INFO][5028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f" Namespace="calico-apiserver" Pod="calico-apiserver-7b598cf86d-jkqzc" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:20:50.111457 containerd[1508]: time="2026-01-17T00:20:50.111237468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:50.111457 containerd[1508]: time="2026-01-17T00:20:50.111277148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:50.111457 containerd[1508]: time="2026-01-17T00:20:50.111286978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:50.111457 containerd[1508]: time="2026-01-17T00:20:50.111381688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:50.140212 systemd[1]: Started cri-containerd-410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f.scope - libcontainer container 410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f. 
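
With the endpoint written to the datastore and the runc shim's task plugins loaded, the sandbox scope starts, and the pod then stalls at the same step every image in this log stalls at. The shape is constant: containerd's resolver asks the registry for the tag's manifest, gets a 404 ("trying next host - response was http.StatusNotFound"), and kubelet turns that into ErrImagePull and then ImagePullBackOff. The sketch below reproduces that resolution step by hand; it assumes ghcr.io's anonymous token endpoint for public repositories and is not containerd's resolver code:

    // Sketch: HEAD a tag's manifest, the request behind the
    // "trying next host - response was http.StatusNotFound" records.
    // Assumption: ghcr.io issues anonymous pull tokens for public repos.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        name, tag := "flatcar/calico/goldmane", "v3.30.4" // taken from the log

        // Fetch an anonymous bearer token scoped to pulling this repository.
        resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + name + ":pull")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            panic(err)
        }

        // HEAD /v2/<name>/manifests/<tag>, per the OCI distribution spec.
        req, err := http.NewRequest(http.MethodHead,
            "https://ghcr.io/v2/"+name+"/manifests/"+tag, nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Add("Accept", "application/vnd.oci.image.index.v1+json")
        req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")

        res, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        res.Body.Close()
        fmt.Println(res.Status) // "404 Not Found" here means the tag itself is absent
    }

A 404 from the manifest endpoint, as opposed to a 401 or 429, points at the reference itself (a tag never pushed, or since deleted) rather than at auth or rate limiting, which matches the NotFound code kubelet reports throughout this log.
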
Jan 17 00:20:50.188313 containerd[1508]: time="2026-01-17T00:20:50.188209078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b598cf86d-jkqzc,Uid:10c4610a-ed07-4e29-932b-b9ab7749e6ed,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f\"" Jan 17 00:20:50.190943 containerd[1508]: time="2026-01-17T00:20:50.190517780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:20:50.529896 systemd-networkd[1408]: calif3855ca4e98: Gained IPv6LL Jan 17 00:20:50.625360 containerd[1508]: time="2026-01-17T00:20:50.624858104Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:50.628564 containerd[1508]: time="2026-01-17T00:20:50.628243826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:20:50.628564 containerd[1508]: time="2026-01-17T00:20:50.628377007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:20:50.630017 kubelet[2574]: E0117 00:20:50.629511 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:50.630017 kubelet[2574]: E0117 00:20:50.629575 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:50.630017 kubelet[2574]: E0117 00:20:50.629798 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vcbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b598cf86d-jkqzc_calico-apiserver(10c4610a-ed07-4e29-932b-b9ab7749e6ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:50.631645 kubelet[2574]: E0117 00:20:50.631517 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:20:50.978288 systemd-networkd[1408]: cali97675bebccd: Gained IPv6LL Jan 17 00:20:51.005361 kubelet[2574]: E0117 00:20:51.004999 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:20:51.005361 kubelet[2574]: E0117 00:20:51.005159 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:20:51.007685 kubelet[2574]: E0117 00:20:51.006343 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:20:51.299741 systemd-networkd[1408]: cali6d2cdf5265d: Gained IPv6LL Jan 17 00:20:52.005342 kubelet[2574]: E0117 00:20:52.004999 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:20:55.671644 containerd[1508]: time="2026-01-17T00:20:55.671453064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:20:56.114568 containerd[1508]: time="2026-01-17T00:20:56.114328571Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:56.116664 containerd[1508]: time="2026-01-17T00:20:56.116573752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:20:56.116806 containerd[1508]: time="2026-01-17T00:20:56.116726863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:20:56.116983 kubelet[2574]: E0117 00:20:56.116930 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:20:56.117540 kubelet[2574]: E0117 00:20:56.116989 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:20:56.117540 kubelet[2574]: E0117 00:20:56.117120 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12f2f0817a3b40168af76823b3573c15,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vt445,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df997d949-g829z_calico-system(e0d4c934-d914-4aab-9515-da3ebc2d4bad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:56.119878 containerd[1508]: time="2026-01-17T00:20:56.119772334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:20:56.542973 containerd[1508]: time="2026-01-17T00:20:56.542863793Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:56.544951 containerd[1508]: time="2026-01-17T00:20:56.544863725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:20:56.545060 containerd[1508]: time="2026-01-17T00:20:56.544994665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:20:56.545398 kubelet[2574]: E0117 00:20:56.545321 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:20:56.545398 kubelet[2574]: E0117 00:20:56.545388 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:20:56.546186 kubelet[2574]: E0117 00:20:56.546093 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vt445,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df997d949-g829z_calico-system(e0d4c934-d914-4aab-9515-da3ebc2d4bad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:56.547374 kubelet[2574]: E0117 00:20:56.547331 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:20:57.668933 containerd[1508]: time="2026-01-17T00:20:57.667958392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:20:58.101640 containerd[1508]: time="2026-01-17T00:20:58.101424652Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:58.103497 containerd[1508]: time="2026-01-17T00:20:58.103427294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:20:58.103878 containerd[1508]: time="2026-01-17T00:20:58.103530504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:20:58.104087 kubelet[2574]: E0117 00:20:58.103822 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:58.104087 kubelet[2574]: E0117 00:20:58.103883 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:58.105934 kubelet[2574]: E0117 00:20:58.104081 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79d8d794ff-xflgs_calico-apiserver(e8ec3d55-57ab-493d-b18c-44cba62fcddb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:58.105934 kubelet[2574]: E0117 00:20:58.105367 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:20:59.658926 containerd[1508]: time="2026-01-17T00:20:59.658822811Z" level=info msg="StopPodSandbox for \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\"" Jan 17 00:20:59.682550 containerd[1508]: time="2026-01-17T00:20:59.680893471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.740 [WARNING][5124] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0", GenerateName:"calico-kube-controllers-7779db755c-", Namespace:"calico-system", SelfLink:"", UID:"7b9ac0b2-c7c5-4408-8470-3fecd940db64", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7779db755c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a", Pod:"calico-kube-controllers-7779db755c-krrrf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3855ca4e98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.740 [INFO][5124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.740 [INFO][5124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" iface="eth0" netns="" Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.740 [INFO][5124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.740 [INFO][5124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.769 [INFO][5134] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" HandleID="k8s-pod-network.90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.769 [INFO][5134] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.769 [INFO][5134] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.777 [WARNING][5134] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" HandleID="k8s-pod-network.90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.777 [INFO][5134] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" HandleID="k8s-pod-network.90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.779 [INFO][5134] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:59.790557 containerd[1508]: 2026-01-17 00:20:59.785 [INFO][5124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:59.791246 containerd[1508]: time="2026-01-17T00:20:59.790700163Z" level=info msg="TearDown network for sandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\" successfully" Jan 17 00:20:59.791246 containerd[1508]: time="2026-01-17T00:20:59.790749183Z" level=info msg="StopPodSandbox for \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\" returns successfully" Jan 17 00:20:59.792055 containerd[1508]: time="2026-01-17T00:20:59.791998573Z" level=info msg="RemovePodSandbox for \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\"" Jan 17 00:20:59.792055 containerd[1508]: time="2026-01-17T00:20:59.792045093Z" level=info msg="Forcibly stopping sandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\"" Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.848 [WARNING][5148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0", GenerateName:"calico-kube-controllers-7779db755c-", Namespace:"calico-system", SelfLink:"", UID:"7b9ac0b2-c7c5-4408-8470-3fecd940db64", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7779db755c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"a5daa5b50a00dd3411643aa8e2d02441f500023188d1975e96e0d6ea8472459a", Pod:"calico-kube-controllers-7779db755c-krrrf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3855ca4e98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.849 [INFO][5148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.849 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" iface="eth0" netns="" Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.849 [INFO][5148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.849 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.881 [INFO][5155] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" HandleID="k8s-pod-network.90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.882 [INFO][5155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.882 [INFO][5155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.894 [WARNING][5155] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" HandleID="k8s-pod-network.90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.894 [INFO][5155] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" HandleID="k8s-pod-network.90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--kube--controllers--7779db755c--krrrf-eth0" Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.896 [INFO][5155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:59.905253 containerd[1508]: 2026-01-17 00:20:59.902 [INFO][5148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef" Jan 17 00:20:59.905253 containerd[1508]: time="2026-01-17T00:20:59.905160778Z" level=info msg="TearDown network for sandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\" successfully" Jan 17 00:20:59.912282 containerd[1508]: time="2026-01-17T00:20:59.912119394Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:20:59.912282 containerd[1508]: time="2026-01-17T00:20:59.912221574Z" level=info msg="RemovePodSandbox \"90cc1ed5428f36012cab524984c81511401ebd05044ce63ca683ea0c091a2eef\" returns successfully" Jan 17 00:20:59.913342 containerd[1508]: time="2026-01-17T00:20:59.912997165Z" level=info msg="StopPodSandbox for \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\"" Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:20:59.974 [WARNING][5170] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"669c9dd2-93ed-4be5-8b4c-834706d32358", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579", Pod:"csi-node-driver-2d8j7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6260c0bf8dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:20:59.975 [INFO][5170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:20:59.977 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" iface="eth0" netns="" Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:20:59.977 [INFO][5170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:20:59.977 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:21:00.023 [INFO][5177] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" HandleID="k8s-pod-network.ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:21:00.023 [INFO][5177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:21:00.023 [INFO][5177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:21:00.035 [WARNING][5177] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" HandleID="k8s-pod-network.ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:21:00.035 [INFO][5177] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" HandleID="k8s-pod-network.ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:21:00.037 [INFO][5177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.046256 containerd[1508]: 2026-01-17 00:21:00.041 [INFO][5170] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:21:00.047254 containerd[1508]: time="2026-01-17T00:21:00.047173958Z" level=info msg="TearDown network for sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\" successfully" Jan 17 00:21:00.047254 containerd[1508]: time="2026-01-17T00:21:00.047222228Z" level=info msg="StopPodSandbox for \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\" returns successfully" Jan 17 00:21:00.048574 containerd[1508]: time="2026-01-17T00:21:00.048103599Z" level=info msg="RemovePodSandbox for \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\"" Jan 17 00:21:00.048574 containerd[1508]: time="2026-01-17T00:21:00.048166730Z" level=info msg="Forcibly stopping sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\"" Jan 17 00:21:00.124456 containerd[1508]: time="2026-01-17T00:21:00.124393301Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:00.126956 containerd[1508]: time="2026-01-17T00:21:00.125980663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:21:00.126956 containerd[1508]: time="2026-01-17T00:21:00.126062053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:21:00.127080 kubelet[2574]: E0117 00:21:00.126331 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:00.127080 kubelet[2574]: E0117 00:21:00.126428 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:00.127080 kubelet[2574]: E0117 00:21:00.126558 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkfrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2d8j7_calico-system(669c9dd2-93ed-4be5-8b4c-834706d32358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:00.134275 containerd[1508]: time="2026-01-17T00:21:00.132817969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.107 [WARNING][5191] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"669c9dd2-93ed-4be5-8b4c-834706d32358", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"c8efc4d25e0effa9acfdbbfe1163c1d5f615c80bc856f0c3ce534a6261057579", Pod:"csi-node-driver-2d8j7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6260c0bf8dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.107 [INFO][5191] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.107 [INFO][5191] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" iface="eth0" netns="" Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.107 [INFO][5191] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.107 [INFO][5191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.146 [INFO][5198] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" HandleID="k8s-pod-network.ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.148 [INFO][5198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.148 [INFO][5198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.158 [WARNING][5198] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" HandleID="k8s-pod-network.ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.158 [INFO][5198] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" HandleID="k8s-pod-network.ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-csi--node--driver--2d8j7-eth0" Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.160 [INFO][5198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.167551 containerd[1508]: 2026-01-17 00:21:00.163 [INFO][5191] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733" Jan 17 00:21:00.168722 containerd[1508]: time="2026-01-17T00:21:00.168204073Z" level=info msg="TearDown network for sandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\" successfully" Jan 17 00:21:00.173050 containerd[1508]: time="2026-01-17T00:21:00.172989407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:00.173171 containerd[1508]: time="2026-01-17T00:21:00.173081737Z" level=info msg="RemovePodSandbox \"ca913b8cdb9c4d961ada3b5334f0bbf68cb16604cc3f829a99c34a117efd0733\" returns successfully" Jan 17 00:21:00.173841 containerd[1508]: time="2026-01-17T00:21:00.173734518Z" level=info msg="StopPodSandbox for \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\"" Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.238 [WARNING][5213] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe9eb613-1b2e-4b40-8b1b-77be36bfdc32", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf", Pod:"coredns-674b8bbfcf-tfsvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283289cd406", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.238 [INFO][5213] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.238 [INFO][5213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" iface="eth0" netns="" Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.238 [INFO][5213] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.238 [INFO][5213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.264 [INFO][5223] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" HandleID="k8s-pod-network.62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.264 [INFO][5223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.265 [INFO][5223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.274 [WARNING][5223] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" HandleID="k8s-pod-network.62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.274 [INFO][5223] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" HandleID="k8s-pod-network.62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.276 [INFO][5223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.285158 containerd[1508]: 2026-01-17 00:21:00.279 [INFO][5213] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:21:00.286855 containerd[1508]: time="2026-01-17T00:21:00.285712092Z" level=info msg="TearDown network for sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\" successfully" Jan 17 00:21:00.286855 containerd[1508]: time="2026-01-17T00:21:00.285779553Z" level=info msg="StopPodSandbox for \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\" returns successfully" Jan 17 00:21:00.288848 containerd[1508]: time="2026-01-17T00:21:00.287935264Z" level=info msg="RemovePodSandbox for \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\"" Jan 17 00:21:00.288848 containerd[1508]: time="2026-01-17T00:21:00.287984034Z" level=info msg="Forcibly stopping sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\"" Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.356 [WARNING][5239] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe9eb613-1b2e-4b40-8b1b-77be36bfdc32", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"a8e251aa3a8bd06fb3f4b71e0720bc08ad272f493e7b4ea414cb934914b436cf", Pod:"coredns-674b8bbfcf-tfsvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283289cd406", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.357 [INFO][5239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.357 [INFO][5239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" iface="eth0" netns="" Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.357 [INFO][5239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.357 [INFO][5239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.393 [INFO][5247] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" HandleID="k8s-pod-network.62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.394 [INFO][5247] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.394 [INFO][5247] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.404 [WARNING][5247] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" HandleID="k8s-pod-network.62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.405 [INFO][5247] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" HandleID="k8s-pod-network.62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--tfsvt-eth0" Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.407 [INFO][5247] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.416730 containerd[1508]: 2026-01-17 00:21:00.411 [INFO][5239] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951" Jan 17 00:21:00.416730 containerd[1508]: time="2026-01-17T00:21:00.415835005Z" level=info msg="TearDown network for sandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\" successfully" Jan 17 00:21:00.426521 containerd[1508]: time="2026-01-17T00:21:00.426350224Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:00.426521 containerd[1508]: time="2026-01-17T00:21:00.426440534Z" level=info msg="RemovePodSandbox \"62f1b8431cc16e6b7787c7f9ecd056bab59ee1ba89ab6b9b381c29f0c45d1951\" returns successfully" Jan 17 00:21:00.429944 containerd[1508]: time="2026-01-17T00:21:00.429407877Z" level=info msg="StopPodSandbox for \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\"" Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.482 [WARNING][5261] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--5dd767c58f--tmjms-eth0" Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.482 [INFO][5261] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.482 [INFO][5261] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" iface="eth0" netns="" Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.482 [INFO][5261] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.482 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.521 [INFO][5268] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" HandleID="k8s-pod-network.109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--5dd767c58f--tmjms-eth0" Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.522 [INFO][5268] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.522 [INFO][5268] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.532 [WARNING][5268] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" HandleID="k8s-pod-network.109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--5dd767c58f--tmjms-eth0" Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.532 [INFO][5268] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" HandleID="k8s-pod-network.109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--5dd767c58f--tmjms-eth0" Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.536 [INFO][5268] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.541718 containerd[1508]: 2026-01-17 00:21:00.537 [INFO][5261] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:21:00.542049 containerd[1508]: time="2026-01-17T00:21:00.541777173Z" level=info msg="TearDown network for sandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\" successfully" Jan 17 00:21:00.542049 containerd[1508]: time="2026-01-17T00:21:00.541815523Z" level=info msg="StopPodSandbox for \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\" returns successfully" Jan 17 00:21:00.542313 containerd[1508]: time="2026-01-17T00:21:00.542298423Z" level=info msg="RemovePodSandbox for \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\"" Jan 17 00:21:00.542508 containerd[1508]: time="2026-01-17T00:21:00.542362533Z" level=info msg="Forcibly stopping sandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\"" Jan 17 00:21:00.586896 containerd[1508]: time="2026-01-17T00:21:00.586792795Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:00.589096 containerd[1508]: time="2026-01-17T00:21:00.589002838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:21:00.589243 containerd[1508]: time="2026-01-17T00:21:00.589122888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:21:00.590708 kubelet[2574]: E0117 00:21:00.589392 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:00.590708 kubelet[2574]: E0117 00:21:00.589439 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:00.590708 kubelet[2574]: E0117 00:21:00.589535 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkfrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2d8j7_calico-system(669c9dd2-93ed-4be5-8b4c-834706d32358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:00.590708 kubelet[2574]: E0117 00:21:00.590671 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.574 [WARNING][5283] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" WorkloadEndpoint="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--5dd767c58f--tmjms-eth0" Jan 17 00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.575 [INFO][5283] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 
00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.575 [INFO][5283] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" iface="eth0" netns="" Jan 17 00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.575 [INFO][5283] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.575 [INFO][5283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.598 [INFO][5291] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" HandleID="k8s-pod-network.109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--5dd767c58f--tmjms-eth0" Jan 17 00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.598 [INFO][5291] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.599 [INFO][5291] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.605 [WARNING][5291] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" HandleID="k8s-pod-network.109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--5dd767c58f--tmjms-eth0" Jan 17 00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.605 [INFO][5291] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" HandleID="k8s-pod-network.109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-whisker--5dd767c58f--tmjms-eth0" Jan 17 00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.607 [INFO][5291] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.616364 containerd[1508]: 2026-01-17 00:21:00.612 [INFO][5283] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3" Jan 17 00:21:00.616812 containerd[1508]: time="2026-01-17T00:21:00.616765893Z" level=info msg="TearDown network for sandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\" successfully" Jan 17 00:21:00.622720 containerd[1508]: time="2026-01-17T00:21:00.622646419Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:21:00.622789 containerd[1508]: time="2026-01-17T00:21:00.622737809Z" level=info msg="RemovePodSandbox \"109885065276f327598d82a2f14a4220c134264a7b102440ca4117cc0d4564d3\" returns successfully" Jan 17 00:21:00.623329 containerd[1508]: time="2026-01-17T00:21:00.623279419Z" level=info msg="StopPodSandbox for \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\"" Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.648 [WARNING][5305] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0", GenerateName:"calico-apiserver-7b598cf86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee43eed9-c394-4ae0-a0e3-7818f2df122b", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b598cf86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5", Pod:"calico-apiserver-7b598cf86d-t5pf2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e80a0a252", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.649 [INFO][5305] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.649 [INFO][5305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" iface="eth0" netns="" Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.649 [INFO][5305] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.649 [INFO][5305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.665 [INFO][5312] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" HandleID="k8s-pod-network.ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.665 [INFO][5312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.666 [INFO][5312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.670 [WARNING][5312] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" HandleID="k8s-pod-network.ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.670 [INFO][5312] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" HandleID="k8s-pod-network.ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.671 [INFO][5312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.675284 containerd[1508]: 2026-01-17 00:21:00.673 [INFO][5305] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:21:00.675975 containerd[1508]: time="2026-01-17T00:21:00.675398159Z" level=info msg="TearDown network for sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\" successfully" Jan 17 00:21:00.676044 containerd[1508]: time="2026-01-17T00:21:00.675419919Z" level=info msg="StopPodSandbox for \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\" returns successfully" Jan 17 00:21:00.676751 containerd[1508]: time="2026-01-17T00:21:00.676430819Z" level=info msg="RemovePodSandbox for \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\"" Jan 17 00:21:00.676751 containerd[1508]: time="2026-01-17T00:21:00.676457599Z" level=info msg="Forcibly stopping sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\"" Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.703 [WARNING][5326] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0", GenerateName:"calico-apiserver-7b598cf86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee43eed9-c394-4ae0-a0e3-7818f2df122b", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b598cf86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"850ecc5e94745951d53bfa8c133dce4f38cf35ae740c049656d75e7e25a9f6d5", Pod:"calico-apiserver-7b598cf86d-t5pf2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e80a0a252", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.704 [INFO][5326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.704 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" iface="eth0" netns="" Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.704 [INFO][5326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.704 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.720 [INFO][5333] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" HandleID="k8s-pod-network.ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.720 [INFO][5333] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.720 [INFO][5333] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.724 [WARNING][5333] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" HandleID="k8s-pod-network.ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.724 [INFO][5333] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" HandleID="k8s-pod-network.ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--t5pf2-eth0" Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.725 [INFO][5333] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.729698 containerd[1508]: 2026-01-17 00:21:00.727 [INFO][5326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80" Jan 17 00:21:00.730655 containerd[1508]: time="2026-01-17T00:21:00.730162010Z" level=info msg="TearDown network for sandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\" successfully" Jan 17 00:21:00.734718 containerd[1508]: time="2026-01-17T00:21:00.734698084Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:00.734826 containerd[1508]: time="2026-01-17T00:21:00.734795834Z" level=info msg="RemovePodSandbox \"ab380873f7cbd5ed138e077dc441ef5925ddf40fe137bf19a40d0d0cad69ef80\" returns successfully" Jan 17 00:21:00.735223 containerd[1508]: time="2026-01-17T00:21:00.735199214Z" level=info msg="StopPodSandbox for \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\"" Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.762 [WARNING][5347] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d3748345-d737-4edc-b312-ed0fa45e5e25", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14", Pod:"goldmane-666569f655-fw7xc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97675bebccd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.762 [INFO][5347] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.762 [INFO][5347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" iface="eth0" netns="" Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.762 [INFO][5347] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.762 [INFO][5347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.780 [INFO][5355] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" HandleID="k8s-pod-network.1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.780 [INFO][5355] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.780 [INFO][5355] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.785 [WARNING][5355] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" HandleID="k8s-pod-network.1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.785 [INFO][5355] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" HandleID="k8s-pod-network.1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.787 [INFO][5355] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.790860 containerd[1508]: 2026-01-17 00:21:00.788 [INFO][5347] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:21:00.790860 containerd[1508]: time="2026-01-17T00:21:00.790841007Z" level=info msg="TearDown network for sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\" successfully" Jan 17 00:21:00.790860 containerd[1508]: time="2026-01-17T00:21:00.790873757Z" level=info msg="StopPodSandbox for \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\" returns successfully" Jan 17 00:21:00.791767 containerd[1508]: time="2026-01-17T00:21:00.791503907Z" level=info msg="RemovePodSandbox for \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\"" Jan 17 00:21:00.791767 containerd[1508]: time="2026-01-17T00:21:00.791533177Z" level=info msg="Forcibly stopping sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\"" Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.824 [WARNING][5370] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d3748345-d737-4edc-b312-ed0fa45e5e25", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"14ae1ed6f5321589ec12470a89691fae09c3f53e5db69ef353533af164204a14", Pod:"goldmane-666569f655-fw7xc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97675bebccd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.824 [INFO][5370] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.824 [INFO][5370] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" iface="eth0" netns="" Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.824 [INFO][5370] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.824 [INFO][5370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.846 [INFO][5378] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" HandleID="k8s-pod-network.1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.846 [INFO][5378] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.846 [INFO][5378] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.854 [WARNING][5378] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" HandleID="k8s-pod-network.1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.854 [INFO][5378] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" HandleID="k8s-pod-network.1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-goldmane--666569f655--fw7xc-eth0" Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.855 [INFO][5378] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.860663 containerd[1508]: 2026-01-17 00:21:00.857 [INFO][5370] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04" Jan 17 00:21:00.860663 containerd[1508]: time="2026-01-17T00:21:00.860668702Z" level=info msg="TearDown network for sandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\" successfully" Jan 17 00:21:00.865927 containerd[1508]: time="2026-01-17T00:21:00.865888077Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:00.865927 containerd[1508]: time="2026-01-17T00:21:00.865931047Z" level=info msg="RemovePodSandbox \"1c1fcda4d359c61f595a35236a4822e0f71ece26c2e64477f6bd8980a0a12e04\" returns successfully" Jan 17 00:21:00.867031 containerd[1508]: time="2026-01-17T00:21:00.866476457Z" level=info msg="StopPodSandbox for \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\"" Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.913 [WARNING][5392] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0", GenerateName:"calico-apiserver-7b598cf86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"10c4610a-ed07-4e29-932b-b9ab7749e6ed", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b598cf86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f", Pod:"calico-apiserver-7b598cf86d-jkqzc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d2cdf5265d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.913 [INFO][5392] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.913 [INFO][5392] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" iface="eth0" netns="" Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.913 [INFO][5392] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.913 [INFO][5392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.936 [INFO][5399] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" HandleID="k8s-pod-network.057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.936 [INFO][5399] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.936 [INFO][5399] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.945 [WARNING][5399] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" HandleID="k8s-pod-network.057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.945 [INFO][5399] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" HandleID="k8s-pod-network.057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.946 [INFO][5399] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.956666 containerd[1508]: 2026-01-17 00:21:00.949 [INFO][5392] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:21:00.958794 containerd[1508]: time="2026-01-17T00:21:00.958722824Z" level=info msg="TearDown network for sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\" successfully" Jan 17 00:21:00.958794 containerd[1508]: time="2026-01-17T00:21:00.958788704Z" level=info msg="StopPodSandbox for \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\" returns successfully" Jan 17 00:21:00.959437 containerd[1508]: time="2026-01-17T00:21:00.959396855Z" level=info msg="RemovePodSandbox for \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\"" Jan 17 00:21:00.959482 containerd[1508]: time="2026-01-17T00:21:00.959443345Z" level=info msg="Forcibly stopping sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\"" Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.009 [WARNING][5414] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0", GenerateName:"calico-apiserver-7b598cf86d-", Namespace:"calico-apiserver", SelfLink:"", UID:"10c4610a-ed07-4e29-932b-b9ab7749e6ed", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b598cf86d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"410588896b9690377d6960e978d9cd567907ebe7b2b4446aa6d02f447dec190f", Pod:"calico-apiserver-7b598cf86d-jkqzc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d2cdf5265d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.010 [INFO][5414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.010 [INFO][5414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" iface="eth0" netns="" Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.010 [INFO][5414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.010 [INFO][5414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.043 [INFO][5422] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" HandleID="k8s-pod-network.057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.043 [INFO][5422] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.043 [INFO][5422] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.048 [WARNING][5422] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" HandleID="k8s-pod-network.057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.048 [INFO][5422] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" HandleID="k8s-pod-network.057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--7b598cf86d--jkqzc-eth0" Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.049 [INFO][5422] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:01.055397 containerd[1508]: 2026-01-17 00:21:01.052 [INFO][5414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f" Jan 17 00:21:01.055397 containerd[1508]: time="2026-01-17T00:21:01.055315375Z" level=info msg="TearDown network for sandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\" successfully" Jan 17 00:21:01.059921 containerd[1508]: time="2026-01-17T00:21:01.059843350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:01.059921 containerd[1508]: time="2026-01-17T00:21:01.059877400Z" level=info msg="RemovePodSandbox \"057e0e54de60cc302787eb48c5c09f82df2a98d7609adaf22c5a590c883bc72f\" returns successfully" Jan 17 00:21:01.060537 containerd[1508]: time="2026-01-17T00:21:01.060383701Z" level=info msg="StopPodSandbox for \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\"" Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.099 [WARNING][5436] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2288281e-fdb7-48d8-b727-f0cc9e2d198b", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219", Pod:"coredns-674b8bbfcf-hv54k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60b8e7abe7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.100 [INFO][5436] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.100 [INFO][5436] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" iface="eth0" netns="" Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.100 [INFO][5436] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.100 [INFO][5436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.134 [INFO][5443] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" HandleID="k8s-pod-network.aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.135 [INFO][5443] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.135 [INFO][5443] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.146 [WARNING][5443] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" HandleID="k8s-pod-network.aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.146 [INFO][5443] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" HandleID="k8s-pod-network.aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.148 [INFO][5443] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:01.154129 containerd[1508]: 2026-01-17 00:21:01.151 [INFO][5436] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:21:01.155371 containerd[1508]: time="2026-01-17T00:21:01.154171190Z" level=info msg="TearDown network for sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\" successfully" Jan 17 00:21:01.155371 containerd[1508]: time="2026-01-17T00:21:01.154207710Z" level=info msg="StopPodSandbox for \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\" returns successfully" Jan 17 00:21:01.155584 containerd[1508]: time="2026-01-17T00:21:01.155506862Z" level=info msg="RemovePodSandbox for \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\"" Jan 17 00:21:01.155584 containerd[1508]: time="2026-01-17T00:21:01.155556442Z" level=info msg="Forcibly stopping sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\"" Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.210 [WARNING][5459] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2288281e-fdb7-48d8-b727-f0cc9e2d198b", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"8facaca5d4d8ad8b175feeb77ee62509d5e070e02478473e3d9ea12744189219", Pod:"coredns-674b8bbfcf-hv54k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60b8e7abe7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.210 [INFO][5459] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.210 [INFO][5459] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" iface="eth0" netns="" Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.211 [INFO][5459] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.211 [INFO][5459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.247 [INFO][5466] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" HandleID="k8s-pod-network.aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.248 [INFO][5466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.248 [INFO][5466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.257 [WARNING][5466] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" HandleID="k8s-pod-network.aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.257 [INFO][5466] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" HandleID="k8s-pod-network.aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-coredns--674b8bbfcf--hv54k-eth0" Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.259 [INFO][5466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:01.266214 containerd[1508]: 2026-01-17 00:21:01.262 [INFO][5459] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c" Jan 17 00:21:01.266214 containerd[1508]: time="2026-01-17T00:21:01.266155548Z" level=info msg="TearDown network for sandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\" successfully" Jan 17 00:21:01.272521 containerd[1508]: time="2026-01-17T00:21:01.272170523Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:01.272521 containerd[1508]: time="2026-01-17T00:21:01.272244804Z" level=info msg="RemovePodSandbox \"aa30638659275f733d41a7a2df5f9db692bf4a1622f7cf2df48a0d650dd33c4c\" returns successfully" Jan 17 00:21:01.272939 containerd[1508]: time="2026-01-17T00:21:01.272884034Z" level=info msg="StopPodSandbox for \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\"" Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.322 [WARNING][5480] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0", GenerateName:"calico-apiserver-79d8d794ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8ec3d55-57ab-493d-b18c-44cba62fcddb", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d8d794ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67", Pod:"calico-apiserver-79d8d794ff-xflgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2dda8058fdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.322 [INFO][5480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.322 [INFO][5480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" iface="eth0" netns="" Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.323 [INFO][5480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.323 [INFO][5480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.357 [INFO][5487] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" HandleID="k8s-pod-network.a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.358 [INFO][5487] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.358 [INFO][5487] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.366 [WARNING][5487] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" HandleID="k8s-pod-network.a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.366 [INFO][5487] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" HandleID="k8s-pod-network.a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.368 [INFO][5487] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:01.375539 containerd[1508]: 2026-01-17 00:21:01.372 [INFO][5480] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:21:01.376435 containerd[1508]: time="2026-01-17T00:21:01.376035413Z" level=info msg="TearDown network for sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\" successfully" Jan 17 00:21:01.376435 containerd[1508]: time="2026-01-17T00:21:01.376133903Z" level=info msg="StopPodSandbox for \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\" returns successfully" Jan 17 00:21:01.377175 containerd[1508]: time="2026-01-17T00:21:01.376913843Z" level=info msg="RemovePodSandbox for \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\"" Jan 17 00:21:01.377175 containerd[1508]: time="2026-01-17T00:21:01.376988533Z" level=info msg="Forcibly stopping sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\"" Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.442 [WARNING][5502] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0", GenerateName:"calico-apiserver-79d8d794ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8ec3d55-57ab-493d-b18c-44cba62fcddb", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d8d794ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8c81c3eeb1", ContainerID:"6d8528cb346fdb678345a4f14ebe68365e3cc4ee6ec7e17ab1b340380e556f67", Pod:"calico-apiserver-79d8d794ff-xflgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2dda8058fdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.443 [INFO][5502] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.443 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" iface="eth0" netns="" Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.443 [INFO][5502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.443 [INFO][5502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.479 [INFO][5509] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" HandleID="k8s-pod-network.a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.479 [INFO][5509] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.480 [INFO][5509] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.489 [WARNING][5509] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" HandleID="k8s-pod-network.a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.489 [INFO][5509] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" HandleID="k8s-pod-network.a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Workload="ci--4081--3--6--n--8c81c3eeb1-k8s-calico--apiserver--79d8d794ff--xflgs-eth0" Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.491 [INFO][5509] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:01.501627 containerd[1508]: 2026-01-17 00:21:01.496 [INFO][5502] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171" Jan 17 00:21:01.501627 containerd[1508]: time="2026-01-17T00:21:01.500120971Z" level=info msg="TearDown network for sandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\" successfully" Jan 17 00:21:01.507940 containerd[1508]: time="2026-01-17T00:21:01.507842059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:01.508030 containerd[1508]: time="2026-01-17T00:21:01.507960619Z" level=info msg="RemovePodSandbox \"a38a83793013318777783dbc37a814cc7813b3e6fd8369bfe01effb438213171\" returns successfully" Jan 17 00:21:02.666432 containerd[1508]: time="2026-01-17T00:21:02.666365461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:03.110731 containerd[1508]: time="2026-01-17T00:21:03.110530546Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:03.112146 containerd[1508]: time="2026-01-17T00:21:03.112042897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:03.112249 containerd[1508]: time="2026-01-17T00:21:03.112143467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:03.112435 kubelet[2574]: E0117 00:21:03.112348 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:03.112435 kubelet[2574]: E0117 00:21:03.112412 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:03.113152 kubelet[2574]: E0117 00:21:03.112573 2574 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmmfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b598cf86d-t5pf2_calico-apiserver(ee43eed9-c394-4ae0-a0e3-7818f2df122b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:03.113957 kubelet[2574]: E0117 00:21:03.113838 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:21:03.667727 containerd[1508]: time="2026-01-17T00:21:03.667639950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:21:04.106880 containerd[1508]: time="2026-01-17T00:21:04.106479217Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:04.108633 containerd[1508]: time="2026-01-17T00:21:04.108481319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:21:04.108633 containerd[1508]: time="2026-01-17T00:21:04.108540119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:21:04.108806 kubelet[2574]: E0117 00:21:04.108739 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:04.108899 kubelet[2574]: E0117 00:21:04.108815 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:04.109153 kubelet[2574]: E0117 00:21:04.109030 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2qj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7779db755c-krrrf_calico-system(7b9ac0b2-c7c5-4408-8470-3fecd940db64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:04.110706 kubelet[2574]: E0117 00:21:04.110645 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:21:04.668365 containerd[1508]: time="2026-01-17T00:21:04.668286324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:05.115182 containerd[1508]: time="2026-01-17T00:21:05.114980907Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:05.117506 containerd[1508]: time="2026-01-17T00:21:05.117418110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:05.117717 containerd[1508]: time="2026-01-17T00:21:05.117522500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:05.117874 kubelet[2574]: E0117 00:21:05.117738 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:05.117874 kubelet[2574]: E0117 00:21:05.117814 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:05.119013 
kubelet[2574]: E0117 00:21:05.117968 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vcbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b598cf86d-jkqzc_calico-apiserver(10c4610a-ed07-4e29-932b-b9ab7749e6ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:05.119284 kubelet[2574]: E0117 00:21:05.119222 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:21:05.667997 containerd[1508]: time="2026-01-17T00:21:05.667903033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:21:06.108920 containerd[1508]: time="2026-01-17T00:21:06.108730456Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:06.111892 containerd[1508]: time="2026-01-17T00:21:06.111085379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:21:06.111892 containerd[1508]: time="2026-01-17T00:21:06.111198819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:06.112085 kubelet[2574]: E0117 00:21:06.111406 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:21:06.112085 kubelet[2574]: E0117 00:21:06.111467 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:21:06.112085 kubelet[2574]: E0117 00:21:06.111713 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ck7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fw7xc_calico-system(d3748345-d737-4edc-b312-ed0fa45e5e25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:06.113861 kubelet[2574]: E0117 00:21:06.113804 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:21:09.671254 kubelet[2574]: E0117 00:21:09.671150 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:21:13.668620 kubelet[2574]: E0117 00:21:13.668442 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:21:13.972635 systemd[1]: 
run-containerd-runc-k8s.io-24f980bce066142fcb7d9732a5dbb51db5574cccc96ee5e1bd138154ccbada0f-runc.JqasdY.mount: Deactivated successfully. Jan 17 00:21:14.668035 kubelet[2574]: E0117 00:21:14.667943 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:21:16.668247 kubelet[2574]: E0117 00:21:16.668053 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:21:17.666626 kubelet[2574]: E0117 00:21:17.666545 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:21:18.668770 kubelet[2574]: E0117 00:21:18.668004 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:21:20.668732 containerd[1508]: time="2026-01-17T00:21:20.668674843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:21:20.670794 kubelet[2574]: E0117 00:21:20.669241 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:21:21.096345 containerd[1508]: time="2026-01-17T00:21:21.095704199Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:21.097411 containerd[1508]: time="2026-01-17T00:21:21.097341675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:21:21.097411 containerd[1508]: time="2026-01-17T00:21:21.097396083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:21:21.097706 kubelet[2574]: E0117 00:21:21.097584 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:21:21.097706 kubelet[2574]: E0117 00:21:21.097630 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:21:21.097827 kubelet[2574]: E0117 00:21:21.097725 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12f2f0817a3b40168af76823b3573c15,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vt445,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-df997d949-g829z_calico-system(e0d4c934-d914-4aab-9515-da3ebc2d4bad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:21.100108 containerd[1508]: time="2026-01-17T00:21:21.099746336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:21:21.541386 containerd[1508]: time="2026-01-17T00:21:21.541316179Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:21.542969 containerd[1508]: time="2026-01-17T00:21:21.542910177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:21:21.543103 containerd[1508]: time="2026-01-17T00:21:21.542938016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:21:21.543375 kubelet[2574]: E0117 00:21:21.543275 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:21:21.543375 kubelet[2574]: E0117 00:21:21.543332 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:21:21.544139 kubelet[2574]: E0117 00:21:21.543448 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vt445,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df997d949-g829z_calico-system(e0d4c934-d914-4aab-9515-da3ebc2d4bad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:21.545071 kubelet[2574]: E0117 00:21:21.544964 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:21:27.669691 containerd[1508]: time="2026-01-17T00:21:27.669298172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:21:28.103628 containerd[1508]: time="2026-01-17T00:21:28.102672847Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:28.105147 containerd[1508]: time="2026-01-17T00:21:28.104973115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:21:28.105147 containerd[1508]: time="2026-01-17T00:21:28.105079212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:21:28.105582 kubelet[2574]: E0117 00:21:28.105470 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:28.106189 kubelet[2574]: E0117 00:21:28.105635 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:28.106189 kubelet[2574]: E0117 00:21:28.105891 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkfrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2d8j7_calico-system(669c9dd2-93ed-4be5-8b4c-834706d32358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:28.110410 containerd[1508]: time="2026-01-17T00:21:28.110037117Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:21:28.526745 containerd[1508]: time="2026-01-17T00:21:28.526461086Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:28.528102 containerd[1508]: time="2026-01-17T00:21:28.527941436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:21:28.528102 containerd[1508]: time="2026-01-17T00:21:28.528037374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:21:28.528824 kubelet[2574]: E0117 00:21:28.528413 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:28.528824 kubelet[2574]: E0117 00:21:28.528476 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:28.528824 kubelet[2574]: E0117 00:21:28.528670 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkfrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2d8j7_calico-system(669c9dd2-93ed-4be5-8b4c-834706d32358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:28.530769 kubelet[2574]: E0117 00:21:28.530695 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:21:28.669279 containerd[1508]: time="2026-01-17T00:21:28.668871208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:29.091810 containerd[1508]: time="2026-01-17T00:21:29.091740411Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:29.093945 containerd[1508]: time="2026-01-17T00:21:29.093840885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:29.094118 containerd[1508]: time="2026-01-17T00:21:29.093981241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:29.094655 kubelet[2574]: E0117 00:21:29.094277 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:29.094655 kubelet[2574]: E0117 00:21:29.094332 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:29.094655 kubelet[2574]: E0117 00:21:29.094472 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79d8d794ff-xflgs_calico-apiserver(e8ec3d55-57ab-493d-b18c-44cba62fcddb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:29.095993 kubelet[2574]: E0117 00:21:29.095960 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:21:29.672553 containerd[1508]: time="2026-01-17T00:21:29.671725539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:30.096333 containerd[1508]: time="2026-01-17T00:21:30.096185484Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:30.099808 containerd[1508]: time="2026-01-17T00:21:30.099151328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:30.099808 containerd[1508]: time="2026-01-17T00:21:30.099281384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:30.099906 kubelet[2574]: E0117 00:21:30.099698 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:30.099906 kubelet[2574]: E0117 00:21:30.099763 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:30.100166 kubelet[2574]: E0117 00:21:30.099990 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vcbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b598cf86d-jkqzc_calico-apiserver(10c4610a-ed07-4e29-932b-b9ab7749e6ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:30.104675 kubelet[2574]: E0117 00:21:30.103705 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:21:31.670671 containerd[1508]: time="2026-01-17T00:21:31.670396326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:21:32.126040 containerd[1508]: time="2026-01-17T00:21:32.125935021Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:32.130191 containerd[1508]: time="2026-01-17T00:21:32.129635902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:21:32.130191 
containerd[1508]: time="2026-01-17T00:21:32.129694910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:21:32.130296 kubelet[2574]: E0117 00:21:32.129803 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:32.130296 kubelet[2574]: E0117 00:21:32.129844 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:32.130296 kubelet[2574]: E0117 00:21:32.129942 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2qj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7779db755c-krrrf_calico-system(7b9ac0b2-c7c5-4408-8470-3fecd940db64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:32.131320 kubelet[2574]: E0117 00:21:32.131283 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:21:32.667378 containerd[1508]: time="2026-01-17T00:21:32.667234614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:32.668761 kubelet[2574]: E0117 00:21:32.668325 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:21:33.095992 containerd[1508]: time="2026-01-17T00:21:33.095714523Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:33.098272 containerd[1508]: time="2026-01-17T00:21:33.097922811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:33.098272 containerd[1508]: time="2026-01-17T00:21:33.098038148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:33.100637 kubelet[2574]: E0117 00:21:33.098518 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:33.100637 kubelet[2574]: E0117 00:21:33.098583 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:33.100637 kubelet[2574]: E0117 00:21:33.098760 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmmfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b598cf86d-t5pf2_calico-apiserver(ee43eed9-c394-4ae0-a0e3-7818f2df122b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" logger="UnhandledError" Jan 17 00:21:33.100637 kubelet[2574]: E0117 00:21:33.100273 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:21:33.670656 containerd[1508]: time="2026-01-17T00:21:33.668774779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:21:34.091777 containerd[1508]: time="2026-01-17T00:21:34.091475471Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:34.094427 containerd[1508]: time="2026-01-17T00:21:34.094153409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:21:34.094427 containerd[1508]: time="2026-01-17T00:21:34.094311936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:34.096391 kubelet[2574]: E0117 00:21:34.094793 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:21:34.096391 kubelet[2574]: E0117 00:21:34.094866 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:21:34.096391 kubelet[2574]: E0117 00:21:34.095102 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ck7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fw7xc_calico-system(d3748345-d737-4edc-b312-ed0fa45e5e25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:34.098690 kubelet[2574]: E0117 00:21:34.097746 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:21:39.672955 kubelet[2574]: E0117 
00:21:39.672799 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:21:40.667711 kubelet[2574]: E0117 00:21:40.667276 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:21:43.667388 kubelet[2574]: E0117 00:21:43.667330 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:21:45.668037 kubelet[2574]: E0117 00:21:45.667774 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:21:45.670861 kubelet[2574]: E0117 00:21:45.670790 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:21:46.666973 kubelet[2574]: E0117 00:21:46.666925 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:21:48.668723 kubelet[2574]: E0117 00:21:48.667918 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:21:50.670085 kubelet[2574]: E0117 00:21:50.669984 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:21:54.666675 kubelet[2574]: E0117 00:21:54.665909 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:21:55.656744 systemd[1]: Started sshd@7-157.180.82.149:22-20.161.92.111:37700.service - OpenSSH 
per-connection server daemon (20.161.92.111:37700). Jan 17 00:21:55.670238 kubelet[2574]: E0117 00:21:55.669837 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:21:56.443322 sshd[5586]: Accepted publickey for core from 20.161.92.111 port 37700 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:21:56.447261 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:56.457290 systemd-logind[1487]: New session 8 of user core. Jan 17 00:21:56.460805 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:21:56.666989 kubelet[2574]: E0117 00:21:56.666896 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:21:57.116954 sshd[5586]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:57.128237 systemd[1]: sshd@7-157.180.82.149:22-20.161.92.111:37700.service: Deactivated successfully. Jan 17 00:21:57.128572 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:21:57.137986 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:21:57.145667 systemd-logind[1487]: Removed session 8. 
Jan 17 00:21:59.669304 kubelet[2574]: E0117 00:21:59.669191 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:22:00.669914 kubelet[2574]: E0117 00:22:00.669839 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:22:02.257014 systemd[1]: Started sshd@8-157.180.82.149:22-20.161.92.111:37716.service - OpenSSH per-connection server daemon (20.161.92.111:37716). Jan 17 00:22:02.668757 kubelet[2574]: E0117 00:22:02.667682 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:22:03.043659 sshd[5607]: Accepted publickey for core from 20.161.92.111 port 37716 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:03.046361 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:03.051512 systemd-logind[1487]: New session 9 of user core. Jan 17 00:22:03.056719 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:22:03.672238 sshd[5607]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:03.683807 systemd[1]: sshd@8-157.180.82.149:22-20.161.92.111:37716.service: Deactivated successfully. Jan 17 00:22:03.689574 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:22:03.692103 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:22:03.694955 systemd-logind[1487]: Removed session 9. 
Jan 17 00:22:05.674280 kubelet[2574]: E0117 00:22:05.674188 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:22:07.666278 kubelet[2574]: E0117 00:22:07.666238 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:22:08.666842 kubelet[2574]: E0117 00:22:08.666741 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:22:08.816740 systemd[1]: Started sshd@9-157.180.82.149:22-20.161.92.111:52112.service - OpenSSH per-connection server daemon (20.161.92.111:52112). Jan 17 00:22:09.587458 sshd[5631]: Accepted publickey for core from 20.161.92.111 port 52112 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:09.591855 sshd[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:09.603234 systemd-logind[1487]: New session 10 of user core. Jan 17 00:22:09.611152 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:22:10.218774 sshd[5631]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:10.227990 systemd[1]: sshd@9-157.180.82.149:22-20.161.92.111:52112.service: Deactivated successfully. Jan 17 00:22:10.232161 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:22:10.233543 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:22:10.236075 systemd-logind[1487]: Removed session 10. Jan 17 00:22:10.364967 systemd[1]: Started sshd@10-157.180.82.149:22-20.161.92.111:52116.service - OpenSSH per-connection server daemon (20.161.92.111:52116). 
Jan 17 00:22:11.141200 sshd[5647]: Accepted publickey for core from 20.161.92.111 port 52116 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:11.144845 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:11.159753 systemd-logind[1487]: New session 11 of user core. Jan 17 00:22:11.163975 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:22:11.670052 kubelet[2574]: E0117 00:22:11.669860 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:22:11.675329 kubelet[2574]: E0117 00:22:11.670882 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:22:11.857876 sshd[5647]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:11.864682 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:22:11.866511 systemd[1]: sshd@10-157.180.82.149:22-20.161.92.111:52116.service: Deactivated successfully. Jan 17 00:22:11.872541 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:22:11.878969 systemd-logind[1487]: Removed session 11. Jan 17 00:22:11.996697 systemd[1]: Started sshd@11-157.180.82.149:22-20.161.92.111:52128.service - OpenSSH per-connection server daemon (20.161.92.111:52128). Jan 17 00:22:12.776223 sshd[5662]: Accepted publickey for core from 20.161.92.111 port 52128 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:12.779939 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:12.792848 systemd-logind[1487]: New session 12 of user core. Jan 17 00:22:12.801981 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:22:13.427742 sshd[5662]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:13.432785 systemd[1]: sshd@11-157.180.82.149:22-20.161.92.111:52128.service: Deactivated successfully. Jan 17 00:22:13.437101 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:22:13.440050 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:22:13.442535 systemd-logind[1487]: Removed session 12. 
Jan 17 00:22:14.671830 containerd[1508]: time="2026-01-17T00:22:14.671336345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:22:15.117188 containerd[1508]: time="2026-01-17T00:22:15.116936417Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:22:15.118773 containerd[1508]: time="2026-01-17T00:22:15.118702002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:22:15.118893 containerd[1508]: time="2026-01-17T00:22:15.118748591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:22:15.119158 kubelet[2574]: E0117 00:22:15.119082 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:22:15.119852 kubelet[2574]: E0117 00:22:15.119166 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:22:15.119852 kubelet[2574]: E0117 00:22:15.119305 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12f2f0817a3b40168af76823b3573c15,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vt445,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df997d949-g829z_calico-system(e0d4c934-d914-4aab-9515-da3ebc2d4bad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:22:15.122631 containerd[1508]: time="2026-01-17T00:22:15.122051602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:22:15.545979 containerd[1508]: time="2026-01-17T00:22:15.545829102Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:22:15.547257 containerd[1508]: time="2026-01-17T00:22:15.547215259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:22:15.547396 containerd[1508]: time="2026-01-17T00:22:15.547341549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:22:15.547560 kubelet[2574]: E0117 00:22:15.547513 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:22:15.547988 kubelet[2574]: E0117 00:22:15.547577 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:22:15.548487 kubelet[2574]: E0117 00:22:15.548429 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vt445,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df997d949-g829z_calico-system(e0d4c934-d914-4aab-9515-da3ebc2d4bad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:22:15.567762 kubelet[2574]: E0117 00:22:15.567689 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:22:15.668668 containerd[1508]: time="2026-01-17T00:22:15.668584010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:22:16.109385 containerd[1508]: time="2026-01-17T00:22:16.109142501Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:22:16.111295 containerd[1508]: time="2026-01-17T00:22:16.111058344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:22:16.111515 containerd[1508]: time="2026-01-17T00:22:16.111183023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:22:16.112101 kubelet[2574]: E0117 00:22:16.111812 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:22:16.112101 kubelet[2574]: E0117 00:22:16.111926 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:22:16.112628 kubelet[2574]: E0117 00:22:16.112435 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ck7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fw7xc_calico-system(d3748345-d737-4edc-b312-ed0fa45e5e25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:22:16.115081 kubelet[2574]: E0117 00:22:16.115005 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:22:18.561734 systemd[1]: Started sshd@12-157.180.82.149:22-20.161.92.111:43898.service - OpenSSH per-connection server daemon (20.161.92.111:43898). 
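Note on the pull failures above: containerd's "trying next host - response was http.StatusNotFound" means the registry's OCI manifest endpoint answered 404 for the requested tag, so ghcr.io/flatcar/calico/*:v3.30.4 simply does not resolve. This is a missing tag, not an authentication or network problem. The following minimal sketch (not part of these logs) reproduces the check by hand, assuming GHCR honors the standard anonymous Docker registry token flow:

// manifest_check.go - a minimal sketch, not the containerd resolver itself.
// It mirrors what the log line implies: a HEAD on the OCI manifest endpoint
// came back 404. Assumption (not taken from the log): ghcr.io issues
// anonymous pull tokens at /token via the standard registry token flow.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

func main() {
	repo, tag := "flatcar/calico/goldmane", "v3.30.4"

	// Step 1: fetch an anonymous bearer token scoped for pull.
	tokURL := "https://ghcr.io/token?service=ghcr.io&scope=" +
		url.QueryEscape("repository:"+repo+":pull")
	resp, err := http.Get(tokURL)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Step 2: HEAD the manifest. 200 = tag exists; 404 matches the
	// http.StatusNotFound seen in the containerd entries above.
	req, _ := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	res.Body.Close()
	fmt.Printf("%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
}

Run against each image named in these entries; a 404 here matches the NotFound the kubelet reports, while a 200 would point suspicion back at the node's registry resolver configuration instead.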
Jan 17 00:22:18.667136 containerd[1508]: time="2026-01-17T00:22:18.667100927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:22:19.106269 containerd[1508]: time="2026-01-17T00:22:19.106176492Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:22:19.107849 containerd[1508]: time="2026-01-17T00:22:19.107756099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:22:19.108814 containerd[1508]: time="2026-01-17T00:22:19.107849869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:22:19.109267 kubelet[2574]: E0117 00:22:19.109102 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:22:19.109267 kubelet[2574]: E0117 00:22:19.109223 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:22:19.127328 kubelet[2574]: E0117 00:22:19.127227 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkfrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2d8j7_calico-system(669c9dd2-93ed-4be5-8b4c-834706d32358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:22:19.130167 containerd[1508]: time="2026-01-17T00:22:19.130089709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:22:19.318652 sshd[5696]: Accepted publickey for core from 20.161.92.111 port 43898 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:19.320798 sshd[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:19.332561 systemd-logind[1487]: New session 13 of user core. Jan 17 00:22:19.341839 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 17 00:22:19.547847 containerd[1508]: time="2026-01-17T00:22:19.547786928Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:22:19.549497 containerd[1508]: time="2026-01-17T00:22:19.549452854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:22:19.549674 containerd[1508]: time="2026-01-17T00:22:19.549544033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:22:19.549849 kubelet[2574]: E0117 00:22:19.549763 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:22:19.549849 kubelet[2574]: E0117 00:22:19.549830 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:22:19.550068 kubelet[2574]: E0117 00:22:19.549979 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkfrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2d8j7_calico-system(669c9dd2-93ed-4be5-8b4c-834706d32358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:22:19.551540 kubelet[2574]: E0117 00:22:19.551456 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:22:19.982896 sshd[5696]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:19.989352 systemd[1]: sshd@12-157.180.82.149:22-20.161.92.111:43898.service: Deactivated successfully. Jan 17 00:22:19.993306 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:22:19.996294 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:22:19.997465 systemd-logind[1487]: Removed session 13. 
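Note: kubelet folds every failing container of a pod into one pod_workers "Error syncing pod" entry, which is why csi-node-driver-2d8j7 lists both calico-csi and csi-node-driver-registrar in a single message above. Below is a hedged client-go sketch (an illustration, not a tool used on this host) that surfaces the same per-container waiting reasons cluster-wide from pod status:

// pullfail_report.go - a minimal client-go sketch. The kubeconfig path and
// program itself are assumptions for illustration; the reasons it matches
// (ErrImagePull, ImagePullBackOff) are the ones in the entries above.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			// A waiting container carries the pull-failure reason kubelet logs.
			if w := st.State.Waiting; w != nil &&
				(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				fmt.Printf("%s/%s container=%s reason=%s image=%s\n",
					p.Namespace, p.Name, st.Name, w.Reason, st.Image)
			}
		}
	}
}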
Jan 17 00:22:20.134727 systemd[1]: Started sshd@13-157.180.82.149:22-20.161.92.111:43912.service - OpenSSH per-connection server daemon (20.161.92.111:43912). Jan 17 00:22:20.666238 containerd[1508]: time="2026-01-17T00:22:20.666116174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:22:20.910537 sshd[5709]: Accepted publickey for core from 20.161.92.111 port 43912 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:20.915501 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:20.928113 systemd-logind[1487]: New session 14 of user core. Jan 17 00:22:20.934840 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:22:21.093872 containerd[1508]: time="2026-01-17T00:22:21.093807918Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:22:21.095499 containerd[1508]: time="2026-01-17T00:22:21.095455485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:22:21.096913 containerd[1508]: time="2026-01-17T00:22:21.095547955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:22:21.096983 kubelet[2574]: E0117 00:22:21.095761 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:22:21.096983 kubelet[2574]: E0117 00:22:21.095849 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:22:21.096983 kubelet[2574]: E0117 00:22:21.096186 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vcbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b598cf86d-jkqzc_calico-apiserver(10c4610a-ed07-4e29-932b-b9ab7749e6ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:22:21.098281 kubelet[2574]: E0117 00:22:21.098155 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:22:21.737753 sshd[5709]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:21.740796 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:22:21.742547 systemd[1]: sshd@13-157.180.82.149:22-20.161.92.111:43912.service: Deactivated successfully. Jan 17 00:22:21.745210 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:22:21.746574 systemd-logind[1487]: Removed session 14. Jan 17 00:22:21.876152 systemd[1]: Started sshd@14-157.180.82.149:22-20.161.92.111:43924.service - OpenSSH per-connection server daemon (20.161.92.111:43924). 
Jan 17 00:22:22.663143 sshd[5720]: Accepted publickey for core from 20.161.92.111 port 43924 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:22.664987 sshd[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:22.672845 containerd[1508]: time="2026-01-17T00:22:22.672086143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:22:22.679208 systemd-logind[1487]: New session 15 of user core. Jan 17 00:22:22.684830 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:22:23.108758 containerd[1508]: time="2026-01-17T00:22:23.108367172Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:22:23.110430 containerd[1508]: time="2026-01-17T00:22:23.109830991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:22:23.110430 containerd[1508]: time="2026-01-17T00:22:23.109935040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:22:23.111661 kubelet[2574]: E0117 00:22:23.110706 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:22:23.111661 kubelet[2574]: E0117 00:22:23.110764 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:22:23.114586 containerd[1508]: time="2026-01-17T00:22:23.112497202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:22:23.130414 kubelet[2574]: E0117 00:22:23.130337 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79d8d794ff-xflgs_calico-apiserver(e8ec3d55-57ab-493d-b18c-44cba62fcddb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:22:23.137538 kubelet[2574]: E0117 00:22:23.137500 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:22:23.547746 containerd[1508]: time="2026-01-17T00:22:23.547587914Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:22:23.551310 containerd[1508]: time="2026-01-17T00:22:23.551207227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:22:23.551310 containerd[1508]: time="2026-01-17T00:22:23.551273286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:22:23.551960 
kubelet[2574]: E0117 00:22:23.551515 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:22:23.551960 kubelet[2574]: E0117 00:22:23.551577 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:22:23.551960 kubelet[2574]: E0117 00:22:23.551711 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmmfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b598cf86d-t5pf2_calico-apiserver(ee43eed9-c394-4ae0-a0e3-7818f2df122b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:22:23.553247 kubelet[2574]: E0117 00:22:23.553224 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:22:23.673374 containerd[1508]: time="2026-01-17T00:22:23.673274988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:22:23.772888 sshd[5720]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:23.776498 systemd[1]: sshd@14-157.180.82.149:22-20.161.92.111:43924.service: Deactivated successfully. Jan 17 00:22:23.776759 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:22:23.779692 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:22:23.783881 systemd-logind[1487]: Removed session 15. Jan 17 00:22:23.907801 systemd[1]: Started sshd@15-157.180.82.149:22-20.161.92.111:48552.service - OpenSSH per-connection server daemon (20.161.92.111:48552). Jan 17 00:22:24.095256 containerd[1508]: time="2026-01-17T00:22:24.095181842Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:22:24.096764 containerd[1508]: time="2026-01-17T00:22:24.096668311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:22:24.096890 containerd[1508]: time="2026-01-17T00:22:24.096762180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:22:24.097162 kubelet[2574]: E0117 00:22:24.097071 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:22:24.097162 kubelet[2574]: E0117 00:22:24.097164 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:22:24.097998 kubelet[2574]: E0117 00:22:24.097887 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2qj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7779db755c-krrrf_calico-system(7b9ac0b2-c7c5-4408-8470-3fecd940db64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:22:24.099663 kubelet[2574]: E0117 00:22:24.099518 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:22:24.673388 sshd[5740]: Accepted publickey for core from 20.161.92.111 port 48552 ssh2: RSA 
SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:24.677021 sshd[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:24.686388 systemd-logind[1487]: New session 16 of user core. Jan 17 00:22:24.696830 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:22:25.367911 sshd[5740]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:25.371373 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:22:25.372454 systemd[1]: sshd@15-157.180.82.149:22-20.161.92.111:48552.service: Deactivated successfully. Jan 17 00:22:25.375683 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:22:25.378072 systemd-logind[1487]: Removed session 16. Jan 17 00:22:25.513076 systemd[1]: Started sshd@16-157.180.82.149:22-20.161.92.111:48558.service - OpenSSH per-connection server daemon (20.161.92.111:48558). Jan 17 00:22:26.276998 sshd[5773]: Accepted publickey for core from 20.161.92.111 port 48558 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:26.281713 sshd[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:26.292502 systemd-logind[1487]: New session 17 of user core. Jan 17 00:22:26.301515 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:22:26.907986 sshd[5773]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:26.911485 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:22:26.913000 systemd[1]: sshd@16-157.180.82.149:22-20.161.92.111:48558.service: Deactivated successfully. Jan 17 00:22:26.915415 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:22:26.916703 systemd-logind[1487]: Removed session 17. 
Jan 17 00:22:27.671389 kubelet[2574]: E0117 00:22:27.671240 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:22:30.666773 kubelet[2574]: E0117 00:22:30.666100 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:22:31.670176 kubelet[2574]: E0117 00:22:31.670083 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:22:31.673789 kubelet[2574]: E0117 00:22:31.673724 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:22:32.047695 systemd[1]: Started sshd@17-157.180.82.149:22-20.161.92.111:48572.service - OpenSSH per-connection server daemon (20.161.92.111:48572). 
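Note: from 00:22:27 onward the errors switch from ErrImagePull to ImagePullBackOff, meaning kubelet is now rate-limiting retries rather than failing fresh pulls. The retry spacing grows roughly exponentially; the 10s initial delay, doubling factor, and 300s cap in the sketch below are commonly cited kubelet defaults and are assumptions here, not values read from this log:

// backoff_schedule.go - a sketch of the retry spacing implied by the
// ImagePullBackOff entries above. All three parameters are assumed
// defaults, not values extracted from this journal.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 300*time.Second
	var elapsed time.Duration
	for i := 1; i <= 8; i++ {
		elapsed += delay
		fmt.Printf("retry %d after %v (t+%v)\n", i, delay, elapsed)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // backoff is capped, so retries keep coming
		}
	}
}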
Jan 17 00:22:32.816867 sshd[5788]: Accepted publickey for core from 20.161.92.111 port 48572 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:32.817875 sshd[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:32.827154 systemd-logind[1487]: New session 18 of user core. Jan 17 00:22:32.830707 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:22:33.432507 sshd[5788]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:33.440513 systemd[1]: sshd@17-157.180.82.149:22-20.161.92.111:48572.service: Deactivated successfully. Jan 17 00:22:33.441386 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:22:33.447236 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:22:33.454743 systemd-logind[1487]: Removed session 18. Jan 17 00:22:34.666547 kubelet[2574]: E0117 00:22:34.666322 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:22:36.669765 kubelet[2574]: E0117 00:22:36.669500 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:22:36.672804 kubelet[2574]: E0117 00:22:36.670120 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:22:38.567703 systemd[1]: Started sshd@18-157.180.82.149:22-20.161.92.111:53764.service - OpenSSH per-connection server daemon (20.161.92.111:53764). Jan 17 00:22:39.346393 sshd[5803]: Accepted publickey for core from 20.161.92.111 port 53764 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:22:39.353141 sshd[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:39.368909 systemd-logind[1487]: New session 19 of user core. Jan 17 00:22:39.376555 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 17 00:22:39.943189 sshd[5803]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:39.949721 systemd[1]: sshd@18-157.180.82.149:22-20.161.92.111:53764.service: Deactivated successfully. Jan 17 00:22:39.956342 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:22:39.958341 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:22:39.963791 systemd-logind[1487]: Removed session 19. Jan 17 00:22:41.669236 kubelet[2574]: E0117 00:22:41.667697 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:22:42.670994 kubelet[2574]: E0117 00:22:42.670840 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:22:43.985132 systemd[1]: run-containerd-runc-k8s.io-24f980bce066142fcb7d9732a5dbb51db5574cccc96ee5e1bd138154ccbada0f-runc.b8kTZj.mount: Deactivated successfully. 
Jan 17 00:22:45.669318 kubelet[2574]: E0117 00:22:45.669204 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed" Jan 17 00:22:45.672283 kubelet[2574]: E0117 00:22:45.672227 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358" Jan 17 00:22:49.668314 kubelet[2574]: E0117 00:22:49.668136 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" Jan 17 00:22:51.669980 kubelet[2574]: E0117 00:22:51.669915 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b" Jan 17 00:22:51.670821 kubelet[2574]: E0117 00:22:51.670019 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-79d8d794ff-xflgs" podUID="e8ec3d55-57ab-493d-b18c-44cba62fcddb" Jan 17 00:22:52.666497 kubelet[2574]: E0117 00:22:52.666422 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw7xc" podUID="d3748345-d737-4edc-b312-ed0fa45e5e25" Jan 17 00:22:56.194893 systemd[1]: cri-containerd-4c0579a3433570adcce39125a28ee233de9b2053111147253478183081aeebbc.scope: Deactivated successfully. Jan 17 00:22:56.195351 systemd[1]: cri-containerd-4c0579a3433570adcce39125a28ee233de9b2053111147253478183081aeebbc.scope: Consumed 21.395s CPU time. Jan 17 00:22:56.233380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c0579a3433570adcce39125a28ee233de9b2053111147253478183081aeebbc-rootfs.mount: Deactivated successfully. Jan 17 00:22:56.240456 containerd[1508]: time="2026-01-17T00:22:56.240146765Z" level=info msg="shim disconnected" id=4c0579a3433570adcce39125a28ee233de9b2053111147253478183081aeebbc namespace=k8s.io Jan 17 00:22:56.240456 containerd[1508]: time="2026-01-17T00:22:56.240229124Z" level=warning msg="cleaning up after shim disconnected" id=4c0579a3433570adcce39125a28ee233de9b2053111147253478183081aeebbc namespace=k8s.io Jan 17 00:22:56.240456 containerd[1508]: time="2026-01-17T00:22:56.240244804Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:56.354736 kubelet[2574]: I0117 00:22:56.354683 2574 scope.go:117] "RemoveContainer" containerID="4c0579a3433570adcce39125a28ee233de9b2053111147253478183081aeebbc" Jan 17 00:22:56.357480 containerd[1508]: time="2026-01-17T00:22:56.357415037Z" level=info msg="CreateContainer within sandbox \"d3a734847b84c37af66a59af29314619d22e591aa6895d22b7d0c34054ee597f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 17 00:22:56.375667 containerd[1508]: time="2026-01-17T00:22:56.375568709Z" level=info msg="CreateContainer within sandbox \"d3a734847b84c37af66a59af29314619d22e591aa6895d22b7d0c34054ee597f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4a7833364bd6d0e684ccf8498c609f70c27410ef643e98d1a2ce35c989633978\"" Jan 17 00:22:56.378381 containerd[1508]: time="2026-01-17T00:22:56.378330696Z" level=info msg="StartContainer for \"4a7833364bd6d0e684ccf8498c609f70c27410ef643e98d1a2ce35c989633978\"" Jan 17 00:22:56.435846 systemd[1]: Started cri-containerd-4a7833364bd6d0e684ccf8498c609f70c27410ef643e98d1a2ce35c989633978.scope - libcontainer container 4a7833364bd6d0e684ccf8498c609f70c27410ef643e98d1a2ce35c989633978. 
Jan 17 00:22:56.493757 containerd[1508]: time="2026-01-17T00:22:56.493488599Z" level=info msg="StartContainer for \"4a7833364bd6d0e684ccf8498c609f70c27410ef643e98d1a2ce35c989633978\" returns successfully" Jan 17 00:22:56.616024 kubelet[2574]: E0117 00:22:56.615963 2574 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41060->10.0.0.2:2379: read: connection timed out" Jan 17 00:22:57.670048 kubelet[2574]: E0117 00:22:57.669975 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df997d949-g829z" podUID="e0d4c934-d914-4aab-9515-da3ebc2d4bad" Jan 17 00:22:57.767057 systemd[1]: cri-containerd-700c15ad25ac40766f78d5dc73a671eb86a47080ea1ab2fde2bce2700e5ad1b9.scope: Deactivated successfully. Jan 17 00:22:57.767299 systemd[1]: cri-containerd-700c15ad25ac40766f78d5dc73a671eb86a47080ea1ab2fde2bce2700e5ad1b9.scope: Consumed 4.094s CPU time, 16.9M memory peak, 0B memory swap peak. Jan 17 00:22:57.800425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-700c15ad25ac40766f78d5dc73a671eb86a47080ea1ab2fde2bce2700e5ad1b9-rootfs.mount: Deactivated successfully. 
Jan 17 00:22:57.808563 containerd[1508]: time="2026-01-17T00:22:57.808177382Z" level=info msg="shim disconnected" id=700c15ad25ac40766f78d5dc73a671eb86a47080ea1ab2fde2bce2700e5ad1b9 namespace=k8s.io
Jan 17 00:22:57.808563 containerd[1508]: time="2026-01-17T00:22:57.808294192Z" level=warning msg="cleaning up after shim disconnected" id=700c15ad25ac40766f78d5dc73a671eb86a47080ea1ab2fde2bce2700e5ad1b9 namespace=k8s.io
Jan 17 00:22:57.808563 containerd[1508]: time="2026-01-17T00:22:57.808355001Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:22:58.364875 kubelet[2574]: I0117 00:22:58.364821 2574 scope.go:117] "RemoveContainer" containerID="700c15ad25ac40766f78d5dc73a671eb86a47080ea1ab2fde2bce2700e5ad1b9"
Jan 17 00:22:58.368368 containerd[1508]: time="2026-01-17T00:22:58.368289382Z" level=info msg="CreateContainer within sandbox \"4e80c512233b3870e1fa1cf35c2067e8e966b884bf5dcae720bffb65d3975cb3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 00:22:58.395956 containerd[1508]: time="2026-01-17T00:22:58.395896852Z" level=info msg="CreateContainer within sandbox \"4e80c512233b3870e1fa1cf35c2067e8e966b884bf5dcae720bffb65d3975cb3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cfbc5a6992cab381bd863feb05e859d19bd3fde8b53dccbcbbffa68ad1a404b9\""
Jan 17 00:22:58.398030 containerd[1508]: time="2026-01-17T00:22:58.396470778Z" level=info msg="StartContainer for \"cfbc5a6992cab381bd863feb05e859d19bd3fde8b53dccbcbbffa68ad1a404b9\""
Jan 17 00:22:58.458881 systemd[1]: Started cri-containerd-cfbc5a6992cab381bd863feb05e859d19bd3fde8b53dccbcbbffa68ad1a404b9.scope - libcontainer container cfbc5a6992cab381bd863feb05e859d19bd3fde8b53dccbcbbffa68ad1a404b9.
Jan 17 00:22:58.538913 containerd[1508]: time="2026-01-17T00:22:58.538831242Z" level=info msg="StartContainer for \"cfbc5a6992cab381bd863feb05e859d19bd3fde8b53dccbcbbffa68ad1a404b9\" returns successfully"
Jan 17 00:22:58.668255 kubelet[2574]: E0117 00:22:58.668116 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2d8j7" podUID="669c9dd2-93ed-4be5-8b4c-834706d32358"
Jan 17 00:22:59.668295 kubelet[2574]: E0117 00:22:59.668199 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-jkqzc" podUID="10c4610a-ed07-4e29-932b-b9ab7749e6ed"
Jan 17 00:23:00.134472 kubelet[2574]: I0117 00:23:00.133501 2574 status_manager.go:895] "Failed to get status for pod" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40990->10.0.0.2:2379: read: connection timed out"
Jan 17 00:23:00.134472 kubelet[2574]: E0117 00:23:00.133502 2574 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40894->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-kube-controllers-7779db755c-krrrf.188b5cbe0befb385 calico-system 1663 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-kube-controllers-7779db755c-krrrf,UID:7b9ac0b2-c7c5-4408-8470-3fecd940db64,APIVersion:v1,ResourceVersion:815,FieldPath:spec.containers{calico-kube-controllers},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-8c81c3eeb1,},FirstTimestamp:2026-01-17 00:20:49 +0000 UTC,LastTimestamp:2026-01-17 00:22:49.668039392 +0000 UTC m=+170.144985671,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-8c81c3eeb1,}"
Jan 17 00:23:00.666646 kubelet[2574]: E0117 00:23:00.666534 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7779db755c-krrrf" podUID="7b9ac0b2-c7c5-4408-8470-3fecd940db64"
Jan 17 00:23:02.419583 systemd[1]: cri-containerd-9e73337eac2ae21f90956855c8c0df6c8884c3d26f3d081c33c3aa47bcf9697b.scope: Deactivated successfully.
Jan 17 00:23:02.420753 systemd[1]: cri-containerd-9e73337eac2ae21f90956855c8c0df6c8884c3d26f3d081c33c3aa47bcf9697b.scope: Consumed 2.404s CPU time, 13.8M memory peak, 0B memory swap peak.
Jan 17 00:23:02.473087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e73337eac2ae21f90956855c8c0df6c8884c3d26f3d081c33c3aa47bcf9697b-rootfs.mount: Deactivated successfully.
Jan 17 00:23:02.474970 containerd[1508]: time="2026-01-17T00:23:02.473871526Z" level=info msg="shim disconnected" id=9e73337eac2ae21f90956855c8c0df6c8884c3d26f3d081c33c3aa47bcf9697b namespace=k8s.io
Jan 17 00:23:02.474970 containerd[1508]: time="2026-01-17T00:23:02.473962895Z" level=warning msg="cleaning up after shim disconnected" id=9e73337eac2ae21f90956855c8c0df6c8884c3d26f3d081c33c3aa47bcf9697b namespace=k8s.io
Jan 17 00:23:02.474970 containerd[1508]: time="2026-01-17T00:23:02.473978815Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:23:02.666318 kubelet[2574]: E0117 00:23:02.666266 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b598cf86d-t5pf2" podUID="ee43eed9-c394-4ae0-a0e3-7818f2df122b"
Jan 17 00:23:03.381911 kubelet[2574]: I0117 00:23:03.381852 2574 scope.go:117] "RemoveContainer" containerID="9e73337eac2ae21f90956855c8c0df6c8884c3d26f3d081c33c3aa47bcf9697b"
Jan 17 00:23:03.384890 containerd[1508]: time="2026-01-17T00:23:03.384559982Z" level=info msg="CreateContainer within sandbox \"98b77b9d488e15c1a9e69865beda388c6d70f7fd71d3f2bccc9e702acc217654\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 00:23:03.408069 containerd[1508]: time="2026-01-17T00:23:03.407983725Z" level=info msg="CreateContainer within sandbox \"98b77b9d488e15c1a9e69865beda388c6d70f7fd71d3f2bccc9e702acc217654\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1851fc164da086666031fe4634a0727269a1a7851ab62e1b9852569fb3096daa\""
Jan 17 00:23:03.409694 containerd[1508]: time="2026-01-17T00:23:03.408785281Z" level=info msg="StartContainer for \"1851fc164da086666031fe4634a0727269a1a7851ab62e1b9852569fb3096daa\""
Jan 17 00:23:03.456835 systemd[1]: Started cri-containerd-1851fc164da086666031fe4634a0727269a1a7851ab62e1b9852569fb3096daa.scope - libcontainer container 1851fc164da086666031fe4634a0727269a1a7851ab62e1b9852569fb3096daa.
Jan 17 00:23:03.547827 containerd[1508]: time="2026-01-17T00:23:03.547773559Z" level=info msg="StartContainer for \"1851fc164da086666031fe4634a0727269a1a7851ab62e1b9852569fb3096daa\" returns successfully"