Jan 24 00:34:39.176911 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:34:39.176927 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:34:39.176949 kernel: BIOS-provided physical RAM map:
Jan 24 00:34:39.176954 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 24 00:34:39.176958 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Jan 24 00:34:39.176962 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 24 00:34:39.176967 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 24 00:34:39.176972 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Jan 24 00:34:39.176976 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Jan 24 00:34:39.176980 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Jan 24 00:34:39.176985 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 24 00:34:39.176992 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 24 00:34:39.176996 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 24 00:34:39.177001 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 24 00:34:39.177006 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 24 00:34:39.177011 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:34:39.177017 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 24 00:34:39.177022 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 24 00:34:39.177027 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:34:39.177031 kernel: NX (Execute Disable) protection: active
Jan 24 00:34:39.177036 kernel: APIC: Static calls initialized
Jan 24 00:34:39.177040 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 24 00:34:39.177045 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e00c198
Jan 24 00:34:39.177050 kernel: efi: Remove mem135: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 24 00:34:39.177054 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 24 00:34:39.177059 kernel: SMBIOS 3.0.0 present.
Jan 24 00:34:39.177064 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 24 00:34:39.177069 kernel: Hypervisor detected: KVM
Jan 24 00:34:39.177076 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:34:39.177080 kernel: kvm-clock: using sched offset of 12295428572 cycles
Jan 24 00:34:39.177085 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:34:39.177090 kernel: tsc: Detected 2399.996 MHz processor
Jan 24 00:34:39.177095 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:34:39.177100 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:34:39.177104 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Jan 24 00:34:39.177109 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 24 00:34:39.177114 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:34:39.177121 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 24 00:34:39.177125 kernel: Using GB pages for direct mapping
Jan 24 00:34:39.177130 kernel: Secure boot disabled
Jan 24 00:34:39.177138 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:34:39.177143 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 24 00:34:39.177148 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 24 00:34:39.177153 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:34:39.177160 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:34:39.177165 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 24 00:34:39.177170 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:34:39.177175 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:34:39.177180 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:34:39.177185 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:34:39.177190 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 24 00:34:39.177197 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Jan 24 00:34:39.177202 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Jan 24 00:34:39.177207 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 24 00:34:39.177212 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Jan 24 00:34:39.177217 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Jan 24 00:34:39.177221 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Jan 24 00:34:39.177226 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Jan 24 00:34:39.177231 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Jan 24 00:34:39.177236 kernel: No NUMA configuration found
Jan 24 00:34:39.177243 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Jan 24 00:34:39.177248 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Jan 24 00:34:39.177253 kernel: Zone ranges:
Jan 24 00:34:39.177258 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:34:39.177263 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 24 00:34:39.177268 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Jan 24 00:34:39.177273 kernel: Movable zone start for each node
Jan 24 00:34:39.177278 kernel: Early memory node ranges
Jan 24 00:34:39.177283 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 24 00:34:39.177288 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Jan 24 00:34:39.177295 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Jan 24 00:34:39.177300 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Jan 24 00:34:39.177305 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Jan 24 00:34:39.177310 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Jan 24 00:34:39.177315 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:34:39.177320 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 24 00:34:39.177325 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 24 00:34:39.177330 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 24 00:34:39.177335 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Jan 24 00:34:39.177342 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 24 00:34:39.177347 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:34:39.177352 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:34:39.177357 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:34:39.177362 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:34:39.177367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:34:39.177371 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:34:39.177376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:34:39.177381 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:34:39.177388 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:34:39.177393 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:34:39.177398 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 24 00:34:39.177403 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:34:39.177408 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 24 00:34:39.177413 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:34:39.177418 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:34:39.177423 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 24 00:34:39.177428 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 24 00:34:39.177435 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 24 00:34:39.177440 kernel: pcpu-alloc: [0] 0 1
Jan 24 00:34:39.177445 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 24 00:34:39.177451 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:34:39.177456 kernel: random: crng init done
Jan 24 00:34:39.177461 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:34:39.177466 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:34:39.177471 kernel: Fallback order for Node 0: 0
Jan 24 00:34:39.177476 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Jan 24 00:34:39.177483 kernel: Policy zone: Normal
Jan 24 00:34:39.177488 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:34:39.177493 kernel: software IO TLB: area num 2.
Jan 24 00:34:39.177504 kernel: Memory: 3827836K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 263128K reserved, 0K cma-reserved)
Jan 24 00:34:39.177509 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 24 00:34:39.177514 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:34:39.177519 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:34:39.177524 kernel: Dynamic Preempt: voluntary
Jan 24 00:34:39.177531 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:34:39.177537 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:34:39.177542 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 24 00:34:39.177547 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:34:39.177559 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:34:39.177566 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:34:39.177571 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:34:39.177576 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 24 00:34:39.177581 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 24 00:34:39.177587 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:34:39.177592 kernel: Console: colour dummy device 80x25
Jan 24 00:34:39.177597 kernel: printk: console [tty0] enabled
Jan 24 00:34:39.177605 kernel: printk: console [ttyS0] enabled
Jan 24 00:34:39.177610 kernel: ACPI: Core revision 20230628
Jan 24 00:34:39.177616 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:34:39.177621 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:34:39.177626 kernel: x2apic enabled
Jan 24 00:34:39.177631 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:34:39.177639 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:34:39.177644 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 24 00:34:39.177649 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399996)
Jan 24 00:34:39.177655 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:34:39.177660 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:34:39.177665 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:34:39.177670 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:34:39.177675 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 24 00:34:39.177683 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 24 00:34:39.177688 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 24 00:34:39.177693 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:34:39.177698 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Jan 24 00:34:39.177703 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:34:39.177709 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:34:39.177714 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:34:39.177719 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:34:39.177724 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:34:39.177731 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 24 00:34:39.177737 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 24 00:34:39.177742 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 24 00:34:39.177747 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 24 00:34:39.177752 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:34:39.177757 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 24 00:34:39.177762 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 24 00:34:39.177768 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 24 00:34:39.177773 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 24 00:34:39.177780 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 24 00:34:39.177785 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:34:39.177791 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:34:39.177796 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:34:39.177801 kernel: landlock: Up and running.
Jan 24 00:34:39.177806 kernel: SELinux: Initializing.
Jan 24 00:34:39.177811 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:34:39.177816 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:34:39.177822 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Jan 24 00:34:39.177829 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:34:39.177834 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:34:39.177840 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:34:39.177845 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 24 00:34:39.177850 kernel: ... version: 0
Jan 24 00:34:39.177855 kernel: ... bit width: 48
Jan 24 00:34:39.177860 kernel: ... generic registers: 6
Jan 24 00:34:39.177865 kernel: ... value mask: 0000ffffffffffff
Jan 24 00:34:39.177870 kernel: ... max period: 00007fffffffffff
Jan 24 00:34:39.177878 kernel: ... fixed-purpose events: 0
Jan 24 00:34:39.177883 kernel: ... event mask: 000000000000003f
Jan 24 00:34:39.177888 kernel: signal: max sigframe size: 3376
Jan 24 00:34:39.177893 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:34:39.177898 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:34:39.177903 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:34:39.177909 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:34:39.177914 kernel: .... node #0, CPUs: #1
Jan 24 00:34:39.177919 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 00:34:39.177926 kernel: smpboot: Max logical packages: 1
Jan 24 00:34:39.177931 kernel: smpboot: Total of 2 processors activated (9599.98 BogoMIPS)
Jan 24 00:34:39.177996 kernel: devtmpfs: initialized
Jan 24 00:34:39.178001 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:34:39.178007 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 24 00:34:39.178012 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:34:39.178018 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 24 00:34:39.178023 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:34:39.178028 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:34:39.178036 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:34:39.178041 kernel: audit: type=2000 audit(1769214877.440:1): state=initialized audit_enabled=0 res=1
Jan 24 00:34:39.178046 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:34:39.178051 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:34:39.178057 kernel: cpuidle: using governor menu
Jan 24 00:34:39.178062 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:34:39.178067 kernel: dca service started, version 1.12.1
Jan 24 00:34:39.178072 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 24 00:34:39.178077 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:34:39.178085 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:34:39.178090 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:34:39.178095 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:34:39.178101 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:34:39.178106 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:34:39.178111 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:34:39.178116 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:34:39.178121 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:34:39.178126 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:34:39.178134 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:34:39.178139 kernel: ACPI: Interpreter enabled
Jan 24 00:34:39.178144 kernel: ACPI: PM: (supports S0 S5)
Jan 24 00:34:39.178149 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:34:39.178154 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:34:39.178159 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:34:39.178165 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:34:39.178170 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:34:39.178318 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:34:39.178424 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:34:39.178527 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:34:39.178534 kernel: PCI host bridge to bus 0000:00
Jan 24 00:34:39.178633 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:34:39.178721 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:34:39.178808 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:34:39.178898 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 24 00:34:39.179001 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 24 00:34:39.179088 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Jan 24 00:34:39.179187 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:34:39.179297 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 00:34:39.179403 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 24 00:34:39.179507 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Jan 24 00:34:39.179608 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Jan 24 00:34:39.179704 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Jan 24 00:34:39.179800 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 24 00:34:39.179895 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 24 00:34:39.180013 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:34:39.180121 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 24 00:34:39.180220 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Jan 24 00:34:39.180321 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 24 00:34:39.180417 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Jan 24 00:34:39.180525 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 24 00:34:39.180621 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Jan 24 00:34:39.180722 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 24 00:34:39.180820 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Jan 24 00:34:39.180920 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 24 00:34:39.181047 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Jan 24 00:34:39.181149 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 24 00:34:39.181246 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Jan 24 00:34:39.181347 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 24 00:34:39.181442 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Jan 24 00:34:39.181555 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 24 00:34:39.181652 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Jan 24 00:34:39.181756 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 24 00:34:39.181852 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Jan 24 00:34:39.181995 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 00:34:39.182093 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:34:39.182196 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 00:34:39.182292 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Jan 24 00:34:39.182385 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Jan 24 00:34:39.182485 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 00:34:39.182586 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Jan 24 00:34:39.182693 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 24 00:34:39.182795 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Jan 24 00:34:39.182894 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Jan 24 00:34:39.183014 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 24 00:34:39.183110 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 24 00:34:39.183217 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 24 00:34:39.183313 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 24 00:34:39.183419 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 24 00:34:39.183533 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Jan 24 00:34:39.183628 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 24 00:34:39.183722 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 24 00:34:39.183827 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 24 00:34:39.183926 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Jan 24 00:34:39.184059 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Jan 24 00:34:39.184155 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 24 00:34:39.184252 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 24 00:34:39.184345 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 24 00:34:39.184449 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 24 00:34:39.184557 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Jan 24 00:34:39.184651 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 24 00:34:39.184745 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 24 00:34:39.184850 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 24 00:34:39.184963 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Jan 24 00:34:39.185064 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Jan 24 00:34:39.185158 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 24 00:34:39.185253 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 24 00:34:39.185347 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 24 00:34:39.185457 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 24 00:34:39.185565 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Jan 24 00:34:39.185669 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Jan 24 00:34:39.185764 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 24 00:34:39.185858 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 24 00:34:39.185975 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 24 00:34:39.185982 kernel: acpiphp: Slot [0] registered
Jan 24 00:34:39.186089 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 24 00:34:39.186188 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Jan 24 00:34:39.186287 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 24 00:34:39.186389 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 24 00:34:39.186484 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 24 00:34:39.186587 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 24 00:34:39.186681 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 24 00:34:39.186687 kernel: acpiphp: Slot [0-2] registered
Jan 24 00:34:39.186781 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 24 00:34:39.186874 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 24 00:34:39.186986 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 24 00:34:39.186996 kernel: acpiphp: Slot [0-3] registered
Jan 24 00:34:39.187091 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 24 00:34:39.187185 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 24 00:34:39.187279 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 24 00:34:39.187285 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:34:39.187290 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:34:39.187295 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:34:39.187301 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:34:39.187308 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:34:39.187314 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:34:39.187319 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:34:39.187324 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:34:39.187329 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:34:39.187334 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:34:39.187339 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:34:39.187344 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:34:39.187350 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:34:39.187357 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:34:39.187362 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:34:39.187367 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:34:39.187373 kernel: iommu: Default domain type: Translated
Jan 24 00:34:39.187378 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:34:39.187383 kernel: efivars: Registered efivars operations
Jan 24 00:34:39.187388 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:34:39.187393 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:34:39.187399 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Jan 24 00:34:39.187407 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Jan 24 00:34:39.187412 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Jan 24 00:34:39.187417 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Jan 24 00:34:39.187519 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:34:39.187613 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:34:39.187708 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:34:39.187714 kernel: vgaarb: loaded
Jan 24 00:34:39.187719 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:34:39.187725 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:34:39.187732 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:34:39.187738 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:34:39.187743 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:34:39.187748 kernel: pnp: PnP ACPI init
Jan 24 00:34:39.187850 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 24 00:34:39.187857 kernel: pnp: PnP ACPI: found 5 devices
Jan 24 00:34:39.187862 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:34:39.187868 kernel: NET: Registered PF_INET protocol family
Jan 24 00:34:39.187889 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:34:39.187896 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:34:39.187902 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:34:39.187907 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:34:39.187913 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:34:39.187918 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:34:39.187924 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:34:39.187929 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:34:39.189953 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:34:39.189967 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:34:39.190090 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 24 00:34:39.190195 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 24 00:34:39.190293 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 24 00:34:39.190388 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 24 00:34:39.190483 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 24 00:34:39.190587 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 24 00:34:39.190685 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 24 00:34:39.190782 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 24 00:34:39.190881 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Jan 24 00:34:39.190989 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 24 00:34:39.191088 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 24 00:34:39.191183 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 24 00:34:39.191277 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 24 00:34:39.191372 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 24 00:34:39.191467 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 24 00:34:39.191570 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 24 00:34:39.191666 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 24 00:34:39.191759 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 24 00:34:39.191854 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 24 00:34:39.193990 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 24 00:34:39.194103 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 24 00:34:39.194203 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 24 00:34:39.194299 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 24 00:34:39.194393 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 24 00:34:39.194490 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 24 00:34:39.194619 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Jan 24 00:34:39.194715 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 24 00:34:39.194814 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 24 00:34:39.194915 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 24 00:34:39.195043 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 24 00:34:39.195139 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 24 00:34:39.195243 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 24 00:34:39.195338 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 24 00:34:39.195432 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 24 00:34:39.195539 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 24 00:34:39.195635 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 24 00:34:39.195733 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 24 00:34:39.195827 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 24 00:34:39.195917 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:34:39.196015 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:34:39.196107 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:34:39.196194 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window]
Jan 24 00:34:39.196281 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 24 00:34:39.196368 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window]
Jan 24 00:34:39.196466 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff]
Jan 24 00:34:39.196566 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 24 00:34:39.196665 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff]
Jan 24 00:34:39.196767 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff]
Jan 24 00:34:39.196859 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 24 00:34:39.198982 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 24 00:34:39.199094 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff]
Jan 24 00:34:39.199189 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 24 00:34:39.199286 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff]
Jan 24 00:34:39.199382 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 24 00:34:39.199479 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 24 00:34:39.199580 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff]
Jan 24 00:34:39.199671 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 24 00:34:39.199769 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 24 00:34:39.199861 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff]
Jan 24 00:34:39.199966 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 24 00:34:39.200067 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 24 00:34:39.200158 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff]
Jan 24 00:34:39.200251 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 24 00:34:39.200258 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:34:39.200264 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:34:39.200270 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 24 00:34:39.200275 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB)
Jan 24 00:34:39.200281 kernel: Initialise system trusted keyrings
Jan 24 00:34:39.200289 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:34:39.200294 kernel: Key type asymmetric registered
Jan 24 00:34:39.200300 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:34:39.200305 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:34:39.200311 kernel: io scheduler mq-deadline registered
Jan 24 00:34:39.200316 kernel: io scheduler kyber registered
Jan 24 00:34:39.200321 kernel: io scheduler bfq registered
Jan 24 00:34:39.202800 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 24 00:34:39.202908 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 24 00:34:39.203022 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 24 00:34:39.203119 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 24 00:34:39.203216 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 24 00:34:39.203312 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 24 00:34:39.203407 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 24 00:34:39.203511 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 24 00:34:39.203607 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 24 00:34:39.203702 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 24 00:34:39.203801 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 24 00:34:39.203896 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 24 00:34:39.204001 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 24 00:34:39.204096 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 24 00:34:39.204191 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 24 00:34:39.204286 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 24 00:34:39.204293 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:34:39.204387 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 24 00:34:39.204485 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 24 00:34:39.204492 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:34:39.204504 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 24 00:34:39.204510 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:34:39.204515 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:34:39.204521 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:34:39.204527 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:34:39.204532 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:34:39.204540 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:34:39.204642 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 24 00:34:39.204733 kernel: rtc_cmos 00:03: registered as rtc0
Jan 24 00:34:39.204823 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T00:34:38 UTC (1769214878)
Jan 24 00:34:39.204913 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 00:34:39.204920 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:34:39.204925 kernel: efifb: probing for efifb
Jan 24 00:34:39.204931 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k
Jan 24 00:34:39.206764 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 24 00:34:39.206774 kernel: efifb: scrolling: redraw
Jan 24 00:34:39.206780 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 24 00:34:39.206786 kernel: Console: switching to colour frame buffer device 160x50
Jan 24 00:34:39.206791 kernel: fb0: EFI VGA frame buffer device
Jan 24 00:34:39.206797 kernel: pstore: Using crash dump compression: deflate
Jan 24 00:34:39.206802 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 24 00:34:39.206808 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:34:39.206813 kernel: Segment Routing with IPv6
Jan 24 00:34:39.206819 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:34:39.206827 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:34:39.206832 kernel: Key type dns_resolver registered
Jan 24 00:34:39.206838 kernel: IPI shorthand broadcast: enabled
Jan 24 00:34:39.206843 kernel: sched_clock: Marking stable (1504010613, 190012575)->(1732480467, -38457279)
Jan 24 00:34:39.206849 kernel: registered taskstats version 1
Jan 24 00:34:39.206854 kernel: Loading compiled-in X.509 certificates
Jan 24 00:34:39.206860 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:34:39.206865 kernel: Key type .fscrypt registered
Jan 24 00:34:39.206871 kernel: Key type fscrypt-provisioning registered
Jan 24 00:34:39.206879 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:34:39.206884 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:34:39.206890 kernel: ima: No architecture policies found
Jan 24 00:34:39.206895 kernel: clk: Disabling unused clocks
Jan 24 00:34:39.206901 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:34:39.206906 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:34:39.206912 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:34:39.206917 kernel: Run /init as init process
Jan 24 00:34:39.206922 kernel: with arguments:
Jan 24 00:34:39.206931 kernel: /init
Jan 24 00:34:39.206944 kernel: with environment:
Jan 24 00:34:39.206949 kernel: HOME=/
Jan 24 00:34:39.206955 kernel: TERM=linux
Jan 24 00:34:39.206962 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:34:39.206970 systemd[1]: Detected virtualization kvm.
Jan 24 00:34:39.206976 systemd[1]: Detected architecture x86-64.
Jan 24 00:34:39.206984 systemd[1]: Running in initrd.
Jan 24 00:34:39.206990 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:34:39.206995 systemd[1]: Hostname set to .
Jan 24 00:34:39.207001 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:34:39.207007 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:34:39.207013 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:34:39.207018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:34:39.207025 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:34:39.207033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:34:39.207039 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:34:39.207045 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:34:39.207052 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:34:39.207058 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:34:39.207063 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:34:39.207069 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:34:39.207078 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:34:39.207086 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:34:39.207091 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:34:39.207097 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:34:39.207103 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:34:39.207109 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:34:39.207114 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:34:39.207120 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:34:39.207128 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:34:39.207134 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:34:39.207140 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:34:39.207145 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:34:39.207151 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:34:39.207157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:34:39.207162 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:34:39.207168 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:34:39.207174 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:34:39.207182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:34:39.207188 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:34:39.207212 systemd-journald[188]: Collecting audit messages is disabled.
Jan 24 00:34:39.207225 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:34:39.207234 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:34:39.207240 systemd-journald[188]: Journal started
Jan 24 00:34:39.207254 systemd-journald[188]: Runtime Journal (/run/log/journal/0c0f7738b7af480b82462b67b27d86f2) is 8.0M, max 76.3M, 68.3M free.
Jan 24 00:34:39.208872 systemd-modules-load[189]: Inserted module 'overlay'
Jan 24 00:34:39.218956 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:34:39.226619 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:34:39.231356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:34:39.238746 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:34:39.238780 kernel: Bridge firewalling registered
Jan 24 00:34:39.235151 systemd-modules-load[189]: Inserted module 'br_netfilter'
Jan 24 00:34:39.235868 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:34:39.251179 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:34:39.256180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:34:39.259240 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:34:39.265755 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:34:39.267989 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:34:39.270370 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:34:39.282100 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:34:39.289040 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:34:39.291203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:34:39.293227 dracut-cmdline[213]: dracut-dracut-053
Jan 24 00:34:39.294311 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:34:39.296014 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:34:39.304620 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:34:39.315463 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:34:39.345708 systemd-resolved[241]: Positive Trust Anchors:
Jan 24 00:34:39.345721 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:34:39.345742 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:34:39.350254 systemd-resolved[241]: Defaulting to hostname 'linux'.
Jan 24 00:34:39.351119 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:34:39.351590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:34:39.370991 kernel: SCSI subsystem initialized
Jan 24 00:34:39.378955 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:34:39.399992 kernel: iscsi: registered transport (tcp)
Jan 24 00:34:39.417202 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:34:39.417268 kernel: QLogic iSCSI HBA Driver
Jan 24 00:34:39.472177 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:34:39.478144 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:34:39.519217 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:34:39.519296 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:34:39.522273 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:34:39.579994 kernel: raid6: avx512x4 gen() 24395 MB/s
Jan 24 00:34:39.597982 kernel: raid6: avx512x2 gen() 33200 MB/s
Jan 24 00:34:39.615981 kernel: raid6: avx512x1 gen() 43391 MB/s
Jan 24 00:34:39.633981 kernel: raid6: avx2x4 gen() 48162 MB/s
Jan 24 00:34:39.651977 kernel: raid6: avx2x2 gen() 49888 MB/s
Jan 24 00:34:39.670751 kernel: raid6: avx2x1 gen() 39300 MB/s
Jan 24 00:34:39.670800 kernel: raid6: using algorithm avx2x2 gen() 49888 MB/s
Jan 24 00:34:39.689790 kernel: raid6: .... xor() 37430 MB/s, rmw enabled
Jan 24 00:34:39.689836 kernel: raid6: using avx512x2 recovery algorithm
Jan 24 00:34:39.705979 kernel: xor: automatically using best checksumming function avx
Jan 24 00:34:39.802972 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:34:39.818382 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:34:39.826164 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:34:39.836688 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Jan 24 00:34:39.840529 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:34:39.849179 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:34:39.863065 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Jan 24 00:34:39.890589 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:34:39.897089 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:34:39.983062 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:34:39.994176 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:34:40.022278 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:34:40.025581 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:34:40.027766 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:34:40.029055 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:34:40.036162 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:34:40.060975 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:34:40.074948 kernel: libata version 3.00 loaded.
Jan 24 00:34:40.081229 kernel: ahci 0000:00:1f.2: version 3.0
Jan 24 00:34:40.081402 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 24 00:34:40.086161 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 24 00:34:40.086312 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 24 00:34:40.100215 kernel: scsi host0: ahci
Jan 24 00:34:40.107242 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:34:40.107768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:34:40.108907 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:34:40.109840 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:34:40.113009 kernel: scsi host1: ahci
Jan 24 00:34:40.110521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:34:40.113863 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:34:40.116975 kernel: scsi host2: ahci
Jan 24 00:34:40.117118 kernel: ACPI: bus type USB registered
Jan 24 00:34:40.119574 kernel: usbcore: registered new interface driver usbfs
Jan 24 00:34:40.119593 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:34:40.119601 kernel: usbcore: registered new interface driver hub
Jan 24 00:34:40.122001 kernel: usbcore: registered new device driver usb
Jan 24 00:34:40.125197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:34:40.127009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:34:40.127458 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:34:40.129222 kernel: scsi host3: ahci
Jan 24 00:34:40.139264 kernel: scsi host4: ahci
Jan 24 00:34:40.139325 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:34:40.140981 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:34:40.145353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:34:40.152947 kernel: scsi host5: ahci
Jan 24 00:34:40.163736 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 37
Jan 24 00:34:40.163761 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 37
Jan 24 00:34:40.163776 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 37
Jan 24 00:34:40.163784 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 37
Jan 24 00:34:40.163791 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 37
Jan 24 00:34:40.163798 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 37
Jan 24 00:34:40.167769 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:34:40.180862 kernel: scsi host6: Virtio SCSI HBA
Jan 24 00:34:40.180527 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:34:40.184963 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 24 00:34:40.194964 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:34:40.477070 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 24 00:34:40.482992 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 24 00:34:40.483042 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 24 00:34:40.491985 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 24 00:34:40.492029 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 24 00:34:40.500984 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 24 00:34:40.501051 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 24 00:34:40.505385 kernel: ata1.00: applying bridge limits
Jan 24 00:34:40.512007 kernel: ata1.00: configured for UDMA/100
Jan 24 00:34:40.518081 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 24 00:34:40.562994 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 24 00:34:40.563357 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 24 00:34:40.576992 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 24 00:34:40.587444 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 24 00:34:40.587814 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 24 00:34:40.588145 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 24 00:34:40.591811 kernel: sd 6:0:0:0: Power-on or device reset occurred
Jan 24 00:34:40.598312 kernel: hub 1-0:1.0: USB hub found
Jan 24 00:34:40.598594 kernel: sd 6:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB)
Jan 24 00:34:40.598841 kernel: hub 1-0:1.0: 4 ports detected
Jan 24 00:34:40.599111 kernel: sd 6:0:0:0: [sda] Write Protect is off
Jan 24 00:34:40.599359 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 24 00:34:40.599606 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 24 00:34:40.604783 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 24 00:34:40.605126 kernel: hub 2-0:1.0: USB hub found
Jan 24 00:34:40.605374 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 24 00:34:40.605390 kernel: hub 2-0:1.0: 4 ports detected
Jan 24 00:34:40.609566 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 24 00:34:40.625290 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:34:40.625313 kernel: GPT:17805311 != 160006143
Jan 24 00:34:40.625322 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:34:40.628972 kernel: GPT:17805311 != 160006143
Jan 24 00:34:40.631125 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:34:40.636133 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:34:40.637976 kernel: sd 6:0:0:0: [sda] Attached SCSI disk
Jan 24 00:34:40.641340 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 24 00:34:40.700009 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (468)
Jan 24 00:34:40.710983 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (457)
Jan 24 00:34:40.725704 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 24 00:34:40.733759 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 24 00:34:40.738163 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 24 00:34:40.742626 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 24 00:34:40.743305 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 24 00:34:40.749052 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:34:40.757841 disk-uuid[584]: Primary Header is updated.
Jan 24 00:34:40.757841 disk-uuid[584]: Secondary Entries is updated.
Jan 24 00:34:40.757841 disk-uuid[584]: Secondary Header is updated.
Jan 24 00:34:40.845054 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 24 00:34:40.993988 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 24 00:34:41.005384 kernel: usbcore: registered new interface driver usbhid
Jan 24 00:34:41.005456 kernel: usbhid: USB HID core driver
Jan 24 00:34:41.021026 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 24 00:34:41.021079 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 24 00:34:41.774052 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:34:41.775175 disk-uuid[585]: The operation has completed successfully.
Jan 24 00:34:41.857138 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:34:41.857272 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:34:41.875110 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:34:41.882812 sh[595]: Success
Jan 24 00:34:41.899222 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 24 00:34:41.965826 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:34:41.975042 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:34:41.976561 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
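The GPT warnings above record a specific inconsistency: the primary GPT header in LBA 1 carries an "alternate LBA" pointer to the backup header, and on this disk that pointer (17805311) no longer matches the last LBA of the grown virtual disk (160006143, since the disk has 160006144 512-byte blocks). A minimal sketch of the same check, assuming Python 3, read permission on the block device, and the /dev/sda path and 512-byte sector size reported in the log:

```python
# Sketch of the consistency check behind the kernel's
# "Alternate GPT header not at the end of the disk" warning.
# Assumptions: /dev/sda, 512-byte logical blocks, read access.
import struct

SECTOR = 512

def check_alternate_gpt(path="/dev/sda"):
    with open(path, "rb") as dev:
        dev.seek(0, 2)                       # seek to end to measure the disk
        last_lba = dev.tell() // SECTOR - 1  # 160006144 blocks -> last LBA 160006143
        dev.seek(1 * SECTOR)                 # primary GPT header lives in LBA 1
        header = dev.read(92)                # standard GPT header size
    if header[:8] != b"EFI PART":
        raise ValueError("no GPT signature in LBA 1")
    (alternate_lba,) = struct.unpack_from("<Q", header, 32)  # AlternateLBA field
    if alternate_lba != last_lba:
        # mirrors the kernel's "GPT:17805311 != 160006143" complaint
        print(f"GPT:{alternate_lba} != {last_lba}")

check_alternate_gpt()
```

Rewriting the backup header at the real end of the disk clears the warning (the kernel message itself points at GNU Parted for this); the disk-uuid[584] lines above ("Primary Header is updated. ... Secondary Header is updated.") appear to record exactly that rewrite during first boot, after which sda's partition table is re-read without complaint.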
Jan 24 00:34:42.004092 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:34:42.004146 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:34:42.008755 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:34:42.008795 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:34:42.011356 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:34:42.021962 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 24 00:34:42.023595 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:34:42.025490 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:34:42.031082 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:34:42.033252 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:34:42.046957 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:34:42.046998 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:34:42.051329 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:34:42.059353 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 24 00:34:42.059386 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:34:42.069551 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:34:42.071879 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:34:42.078604 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:34:42.084088 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:34:42.148777 ignition[689]: Ignition 2.19.0
Jan 24 00:34:42.148789 ignition[689]: Stage: fetch-offline
Jan 24 00:34:42.148836 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:34:42.148845 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 24 00:34:42.148917 ignition[689]: parsed url from cmdline: ""
Jan 24 00:34:42.148921 ignition[689]: no config URL provided
Jan 24 00:34:42.148926 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:34:42.148944 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:34:42.151815 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:34:42.148957 ignition[689]: failed to fetch config: resource requires networking
Jan 24 00:34:42.149216 ignition[689]: Ignition finished successfully
Jan 24 00:34:42.155216 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:34:42.161099 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:34:42.178646 systemd-networkd[781]: lo: Link UP
Jan 24 00:34:42.178652 systemd-networkd[781]: lo: Gained carrier
Jan 24 00:34:42.181203 systemd-networkd[781]: Enumeration completed
Jan 24 00:34:42.181513 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:34:42.182127 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:34:42.182131 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:34:42.183012 systemd[1]: Reached target network.target - Network.
Jan 24 00:34:42.183849 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:34:42.183853 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:34:42.184439 systemd-networkd[781]: eth0: Link UP
Jan 24 00:34:42.184444 systemd-networkd[781]: eth0: Gained carrier
Jan 24 00:34:42.184450 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:34:42.186273 systemd-networkd[781]: eth1: Link UP
Jan 24 00:34:42.186277 systemd-networkd[781]: eth1: Gained carrier
Jan 24 00:34:42.186283 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:34:42.188037 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 24 00:34:42.199828 ignition[785]: Ignition 2.19.0
Jan 24 00:34:42.200392 ignition[785]: Stage: fetch
Jan 24 00:34:42.201298 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:34:42.201311 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 24 00:34:42.201372 ignition[785]: parsed url from cmdline: ""
Jan 24 00:34:42.201376 ignition[785]: no config URL provided
Jan 24 00:34:42.201380 ignition[785]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:34:42.201388 ignition[785]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:34:42.201401 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 24 00:34:42.201520 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 24 00:34:42.224998 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 24 00:34:42.241978 systemd-networkd[781]: eth0: DHCPv4 address 65.21.184.255/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 24 00:34:42.401771 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 24 00:34:42.408686 ignition[785]: GET result: OK
Jan 24 00:34:42.408809 ignition[785]: parsing config with SHA512: 1d44edbae97fe5137eae82747272df6fea9200fb7453690c209263e15a244d63eb8c9f5771bd5d1b49b724aeb15573695a87681e97997170b51796affdb52b28
Jan 24 00:34:42.414412 unknown[785]: fetched base config from "system"
Jan 24 00:34:42.415055 ignition[785]: fetch: fetch complete
Jan 24 00:34:42.414431 unknown[785]: fetched base config from "system"
Jan 24 00:34:42.415067 ignition[785]: fetch: fetch passed
Jan 24 00:34:42.414443 unknown[785]: fetched user config from "hetzner"
Jan 24 00:34:42.415152 ignition[785]: Ignition finished successfully
Jan 24 00:34:42.420929 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 24 00:34:42.431194 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:34:42.475263 ignition[792]: Ignition 2.19.0
Jan 24 00:34:42.475283 ignition[792]: Stage: kargs
Jan 24 00:34:42.475579 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:34:42.475609 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 24 00:34:42.479820 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
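The ignition[785] fetch stage above races network bring-up: attempt #1 against the metadata service fails with "network is unreachable", the DHCPv4 leases for eth1 and eth0 arrive, and attempt #2 succeeds, at which point Ignition logs the SHA512 of the config it is about to parse. A sketch of that fetch-with-retry loop, assuming Python 3; the endpoint URL is taken verbatim from the log, while the retry count and fixed delay are assumptions (Ignition's real backoff policy differs):

```python
# Sketch of the fetch/retry pattern the ignition[785] messages describe.
# Assumptions: Python 3; retries and delay are illustrative only.
import hashlib
import time
import urllib.error
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(retries=5, delay=2.0):
    for attempt in range(1, retries + 1):
        print(f"GET {URL}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                data = resp.read()
            # Ignition logs the config's SHA512 before parsing it
            print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
            return data
        except (urllib.error.URLError, OSError) as err:
            # before DHCP completes this surfaces as "network is unreachable"
            print("GET error:", err)
            time.sleep(delay)
    raise RuntimeError("giving up fetching userdata")
```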
Jan 24 00:34:42.477020 ignition[792]: kargs: kargs passed
Jan 24 00:34:42.477113 ignition[792]: Ignition finished successfully
Jan 24 00:34:42.489215 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:34:42.513435 ignition[799]: Ignition 2.19.0
Jan 24 00:34:42.513455 ignition[799]: Stage: disks
Jan 24 00:34:42.513730 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:34:42.513753 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 24 00:34:42.515031 ignition[799]: disks: disks passed
Jan 24 00:34:42.518413 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:34:42.515124 ignition[799]: Ignition finished successfully
Jan 24 00:34:42.520349 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:34:42.521310 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:34:42.522891 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:34:42.524434 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:34:42.526026 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:34:42.534169 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:34:42.577066 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 24 00:34:42.582399 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:34:42.590126 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:34:42.710647 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:34:42.710790 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:34:42.711600 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:34:42.722008 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:34:42.725013 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:34:42.727621 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 24 00:34:42.730307 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:34:42.730678 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:34:42.745465 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (816)
Jan 24 00:34:42.745495 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:34:42.749531 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:34:42.749560 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:34:42.750433 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:34:42.752272 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:34:42.760335 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 24 00:34:42.760363 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:34:42.771989 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:34:42.804218 coreos-metadata[818]: Jan 24 00:34:42.804 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 24 00:34:42.805475 coreos-metadata[818]: Jan 24 00:34:42.805 INFO Fetch successful
Jan 24 00:34:42.807307 coreos-metadata[818]: Jan 24 00:34:42.805 INFO wrote hostname ci-4081-3-6-n-56b1d28098 to /sysroot/etc/hostname
Jan 24 00:34:42.808173 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 24 00:34:42.826901 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:34:42.832147 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:34:42.837798 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:34:42.845229 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:34:43.014191 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:34:43.028113 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:34:43.032208 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:34:43.046884 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:34:43.053202 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:34:43.095452 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:34:43.098618 ignition[933]: INFO : Ignition 2.19.0
Jan 24 00:34:43.098618 ignition[933]: INFO : Stage: mount
Jan 24 00:34:43.100925 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:34:43.100925 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 24 00:34:43.102352 ignition[933]: INFO : mount: mount passed
Jan 24 00:34:43.102352 ignition[933]: INFO : Ignition finished successfully
Jan 24 00:34:43.103838 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 00:34:43.110074 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 00:34:43.140187 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:34:43.158023 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Jan 24 00:34:43.165002 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:34:43.165049 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:34:43.173772 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:34:43.181482 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 24 00:34:43.181530 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:34:43.189692 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
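The coreos-metadata[818] lines above are the whole of the hostname agent's job: one GET against the Hetzner metadata service, then one file write into the still-mounted /sysroot. A sketch under the same assumptions (Python 3; URL and target path as logged):

```python
# Sketch of what the coreos-metadata[818] messages record: fetch the
# hostname from the metadata service and write it under /sysroot.
# Assumptions: Python 3; URL and path are taken from the log itself.
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_hostname(sysroot="/sysroot"):
    print(f"Fetching {URL}: Attempt #1")
    with urllib.request.urlopen(URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    print("Fetch successful")
    path = f"{sysroot}/etc/hostname"
    with open(path, "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to {path}")

write_hostname()
```

The value written here, ci-4081-3-6-n-56b1d28098, is the hostname the new systemd instance reports once it switches into the real root later in this log.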
Jan 24 00:34:43.224442 ignition[960]: INFO : Ignition 2.19.0
Jan 24 00:34:43.225737 ignition[960]: INFO : Stage: files
Jan 24 00:34:43.226826 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:34:43.227680 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 24 00:34:43.229890 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 00:34:43.231082 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 00:34:43.231082 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 00:34:43.235912 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 00:34:43.236864 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 00:34:43.237729 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 00:34:43.236983 unknown[960]: wrote ssh authorized keys file for user: core
Jan 24 00:34:43.240702 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:34:43.242233 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 24 00:34:43.410118 systemd-networkd[781]: eth0: Gained IPv6LL
Jan 24 00:34:43.487537 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 24 00:34:43.793019 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:34:43.793019 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 00:34:43.793019 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 00:34:43.793019 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:34:43.793019 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:34:43.793019 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:34:43.793019 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:34:43.793019 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:34:43.793019 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:34:43.802765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:34:43.802765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:34:43.802765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:34:43.802765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:34:43.802765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:34:43.802765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 24 00:34:43.922276 systemd-networkd[781]: eth1: Gained IPv6LL
Jan 24 00:34:44.144983 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 24 00:34:44.482982 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:34:44.482982 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 24 00:34:44.485974 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:34:44.485974 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:34:44.485974 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 24 00:34:44.485974 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 24 00:34:44.485974 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 24 00:34:44.485974 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 24 00:34:44.485974 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 24 00:34:44.485974 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jan 24 00:34:44.485974 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jan 24 00:34:44.485974 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:34:44.501169 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:34:44.501169 ignition[960]: INFO : files: files passed
Jan 24 00:34:44.501169 ignition[960]: INFO : Ignition finished successfully
Jan 24 00:34:44.489652 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 00:34:44.501501 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 00:34:44.506037 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 00:34:44.507313 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 00:34:44.507506 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 00:34:44.520057 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:34:44.520709 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:34:44.520709 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:34:44.522601 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:34:44.523438 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 00:34:44.529078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 00:34:44.557697 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 00:34:44.557920 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 00:34:44.560612 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 00:34:44.561803 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 00:34:44.563728 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 00:34:44.573140 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 00:34:44.596114 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:34:44.609298 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 00:34:44.621649 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:34:44.622540 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:34:44.623402 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 00:34:44.624302 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 00:34:44.624448 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:34:44.626209 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 00:34:44.627515 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 00:34:44.628797 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 00:34:44.630051 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:34:44.631268 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 00:34:44.632514 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 00:34:44.633771 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:34:44.635058 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 00:34:44.636278 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 00:34:44.637527 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 00:34:44.638762 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 00:34:44.638902 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:34:44.640590 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:34:44.641876 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:34:44.643056 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 00:34:44.643197 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:34:44.644313 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 00:34:44.644450 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:34:44.646163 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 00:34:44.646313 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:34:44.647485 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 00:34:44.647624 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 00:34:44.648742 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 24 00:34:44.648874 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 24 00:34:44.658175 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 00:34:44.659255 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 00:34:44.659414 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:34:44.663209 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 00:34:44.663968 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 00:34:44.664172 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:34:44.665211 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 00:34:44.666172 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:34:44.677220 ignition[1014]: INFO : Ignition 2.19.0
Jan 24 00:34:44.681010 ignition[1014]: INFO : Stage: umount
Jan 24 00:34:44.681010 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:34:44.681010 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 24 00:34:44.681010 ignition[1014]: INFO : umount: umount passed
Jan 24 00:34:44.681010 ignition[1014]: INFO : Ignition finished successfully
Jan 24 00:34:44.680015 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 00:34:44.680194 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 00:34:44.681499 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 00:34:44.681677 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 00:34:44.685745 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 00:34:44.685819 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 00:34:44.686694 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 00:34:44.686765 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 00:34:44.687891 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 24 00:34:44.687992 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 24 00:34:44.690479 systemd[1]: Stopped target network.target - Network.
Jan 24 00:34:44.692594 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 00:34:44.692652 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:34:44.695030 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 00:34:44.696156 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 00:34:44.699995 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:34:44.702782 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 00:34:44.704011 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 00:34:44.705156 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 00:34:44.705210 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:34:44.707129 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 00:34:44.707219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:34:44.708275 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 00:34:44.708365 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 00:34:44.709450 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 00:34:44.709534 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 00:34:44.711149 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 00:34:44.712628 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 00:34:44.714966 systemd-networkd[781]: eth0: DHCPv6 lease lost
Jan 24 00:34:44.715899 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 00:34:44.717065 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 00:34:44.717257 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 00:34:44.718978 systemd-networkd[781]: eth1: DHCPv6 lease lost
Jan 24 00:34:44.719123 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 00:34:44.719250 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 00:34:44.722793 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 00:34:44.722893 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 00:34:44.725210 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 00:34:44.725323 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 00:34:44.727056 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 00:34:44.727110 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:34:44.733034 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 00:34:44.734030 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 00:34:44.734074 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:34:44.736011 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 00:34:44.736052 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:34:44.737137 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 00:34:44.737175 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:34:44.738239 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 00:34:44.738275 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:34:44.739452 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:34:44.751190 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 00:34:44.751339 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:34:44.752058 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 00:34:44.752114 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:34:44.752476 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 00:34:44.752505 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:34:44.752837 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 00:34:44.752871 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:34:44.754121 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 00:34:44.754160 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:34:44.756053 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:34:44.756092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:34:44.757988 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 00:34:44.758317 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 00:34:44.758358 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:34:44.758723 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:34:44.758756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:34:44.759363 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 00:34:44.759465 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 00:34:44.779855 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 00:34:44.779959 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 00:34:44.781377 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 00:34:44.787116 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 00:34:44.793594 systemd[1]: Switching root.
Jan 24 00:34:44.825340 systemd-journald[188]: Journal stopped
Jan 24 00:34:45.912107 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Jan 24 00:34:45.912181 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 00:34:45.912192 kernel: SELinux: policy capability open_perms=1
Jan 24 00:34:45.912201 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 00:34:45.912209 kernel: SELinux: policy capability always_check_network=0
Jan 24 00:34:45.912221 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 00:34:45.912231 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 00:34:45.912239 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 00:34:45.912247 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 00:34:45.912255 kernel: audit: type=1403 audit(1769214885.003:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 00:34:45.912269 systemd[1]: Successfully loaded SELinux policy in 43.329ms.
Jan 24 00:34:45.912292 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.236ms.
Jan 24 00:34:45.912301 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:34:45.912310 systemd[1]: Detected virtualization kvm.
Jan 24 00:34:45.912324 systemd[1]: Detected architecture x86-64.
Jan 24 00:34:45.912337 systemd[1]: Detected first boot.
Jan 24 00:34:45.912345 systemd[1]: Hostname set to <ci-4081-3-6-n-56b1d28098>.
Jan 24 00:34:45.912355 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:34:45.914522 zram_generator::config[1056]: No configuration found.
Jan 24 00:34:45.914541 systemd[1]: Populated /etc with preset unit settings.
Jan 24 00:34:45.914551 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 00:34:45.914565 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 00:34:45.914603 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:34:45.914622 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 00:34:45.914635 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 00:34:45.914644 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:34:45.914652 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 00:34:45.914665 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 00:34:45.914674 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 00:34:45.914682 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 00:34:45.914691 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 00:34:45.914699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:34:45.914708 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:34:45.914717 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 00:34:45.914727 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 00:34:45.914738 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 00:34:45.914746 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:34:45.914755 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 00:34:45.914764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:34:45.914772 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 00:34:45.914781 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 00:34:45.914790 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:34:45.914801 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 00:34:45.914809 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:34:45.914818 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:34:45.914828 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:34:45.914837 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:34:45.914846 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 00:34:45.914855 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 00:34:45.914864 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:34:45.914872 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:34:45.914884 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:34:45.914892 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 00:34:45.914901 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 00:34:45.914910 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 00:34:45.914918 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 00:34:45.914927 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:34:45.915368 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 00:34:45.915380 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 00:34:45.915389 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 00:34:45.915401 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:34:45.915409 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:34:45.915418 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 00:34:45.915427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:34:45.915435 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:34:45.915459 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 00:34:45.915468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:34:45.915477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:34:45.915488 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:34:45.915497 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 00:34:45.915505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:34:45.915514 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:34:45.915523 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 24 00:34:45.915532 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 24 00:34:45.915540 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 24 00:34:45.915549 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 24 00:34:45.915563 kernel: fuse: init (API version 7.39)
Jan 24 00:34:45.915585 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:34:45.915611 kernel: loop: module loaded
Jan 24 00:34:45.915624 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:34:45.915634 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:34:45.915643 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 00:34:45.915652 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:34:45.915661 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 24 00:34:45.915689 systemd-journald[1139]: Collecting audit messages is disabled.
Jan 24 00:34:45.915717 systemd[1]: Stopped verity-setup.service.
Jan 24 00:34:45.915726 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:34:45.915735 kernel: ACPI: bus type drm_connector registered
Jan 24 00:34:45.915744 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 00:34:45.915753 systemd-journald[1139]: Journal started
Jan 24 00:34:45.915771 systemd-journald[1139]: Runtime Journal (/run/log/journal/0c0f7738b7af480b82462b67b27d86f2) is 8.0M, max 76.3M, 68.3M free.
Jan 24 00:34:45.573533 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 00:34:45.605601 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 24 00:34:45.606145 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 24 00:34:45.917988 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:34:45.920090 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 00:34:45.920626 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 00:34:45.921177 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 00:34:45.921743 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 00:34:45.922293 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 00:34:45.922984 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 00:34:45.923643 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:34:45.924326 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 00:34:45.924511 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 00:34:45.925359 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:34:45.925548 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:34:45.926245 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:34:45.926425 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:34:45.927176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:34:45.927358 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:34:45.928043 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 00:34:45.928219 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 00:34:45.928880 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:34:45.929209 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:34:45.929854 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:34:45.930490 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:34:45.931259 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 00:34:45.940503 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:34:45.947063 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 00:34:45.951026 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 00:34:45.951391 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:34:45.951411 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:34:45.952460 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 00:34:45.959362 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 00:34:45.964113 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 00:34:45.964560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:34:45.971356 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 24 00:34:45.979073 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 24 00:34:45.979442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:34:45.980680 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 24 00:34:45.981387 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:34:45.983071 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:34:45.986283 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 00:34:45.988076 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 24 00:34:45.990417 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 24 00:34:45.999504 systemd-journald[1139]: Time spent on flushing to /var/log/journal/0c0f7738b7af480b82462b67b27d86f2 is 76.311ms for 1174 entries.
Jan 24 00:34:45.999504 systemd-journald[1139]: System Journal (/var/log/journal/0c0f7738b7af480b82462b67b27d86f2) is 8.0M, max 584.8M, 576.8M free.
Jan 24 00:34:46.122064 systemd-journald[1139]: Received client request to flush runtime journal.
Jan 24 00:34:46.122115 kernel: loop0: detected capacity change from 0 to 8
Jan 24 00:34:46.122143 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:34:46.122158 kernel: loop1: detected capacity change from 0 to 140768
Jan 24 00:34:45.992130 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 24 00:34:45.992689 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 00:34:46.014371 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 24 00:34:46.015079 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 24 00:34:46.023113 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 00:34:46.079124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:34:46.095769 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:34:46.105331 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 24 00:34:46.126298 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 24 00:34:46.134300 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 00:34:46.136815 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 00:34:46.138561 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 24 00:34:46.141200 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 24 00:34:46.148249 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:34:46.161960 kernel: loop2: detected capacity change from 0 to 142488
Jan 24 00:34:46.179545 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Jan 24 00:34:46.179566 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Jan 24 00:34:46.185461 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:34:46.210979 kernel: loop3: detected capacity change from 0 to 229808
Jan 24 00:34:46.248962 kernel: loop4: detected capacity change from 0 to 8
Jan 24 00:34:46.252954 kernel: loop5: detected capacity change from 0 to 140768
Jan 24 00:34:46.277975 kernel: loop6: detected capacity change from 0 to 142488
Jan 24 00:34:46.297970 kernel: loop7: detected capacity change from 0 to 229808
Jan 24 00:34:46.319621 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 24 00:34:46.321612 (sd-merge)[1201]: Merged extensions into '/usr'.
Jan 24 00:34:46.325289 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 00:34:46.325305 systemd[1]: Reloading...
Jan 24 00:34:46.388963 zram_generator::config[1224]: No configuration found.
Jan 24 00:34:46.445611 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 24 00:34:46.504301 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:34:46.540208 systemd[1]: Reloading finished in 214 ms.
Jan 24 00:34:46.569548 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 00:34:46.570389 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 24 00:34:46.573693 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 00:34:46.582121 systemd[1]: Starting ensure-sysext.service...
Jan 24 00:34:46.592101 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:34:46.602220 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:34:46.608703 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)...
Jan 24 00:34:46.608741 systemd[1]: Reloading...
Jan 24 00:34:46.610206 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 00:34:46.610768 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 24 00:34:46.611687 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 00:34:46.612210 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jan 24 00:34:46.612329 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jan 24 00:34:46.615375 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:34:46.615450 systemd-tmpfiles[1272]: Skipping /boot
Jan 24 00:34:46.637049 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:34:46.637061 systemd-tmpfiles[1272]: Skipping /boot Jan 24 00:34:46.667378 systemd-udevd[1274]: Using default interface naming scheme 'v255'. Jan 24 00:34:46.694956 zram_generator::config[1299]: No configuration found. Jan 24 00:34:46.844235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:34:46.847292 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:34:46.881045 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:34:46.887823 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:34:46.888205 systemd[1]: Reloading finished in 278 ms. Jan 24 00:34:46.902799 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:34:46.903392 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:34:46.917274 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 24 00:34:46.920653 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:46.924418 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:34:46.924453 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 24 00:34:46.930975 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:34:46.931151 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 00:34:46.937876 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:34:46.954007 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:34:46.938064 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:34:46.939885 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:34:46.940328 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:34:46.942373 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:34:46.944128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:34:46.955083 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:34:46.955524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:34:46.959378 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:34:46.960819 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 24 00:34:46.960851 kernel: Console: switching to colour dummy device 80x25 Jan 24 00:34:46.988295 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 24 00:34:46.997079 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:34:47.001152 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:34:47.007540 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 24 00:34:47.007591 kernel: [drm] features: -context_init Jan 24 00:34:47.009706 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 24 00:34:47.009791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:47.012716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:34:47.012858 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:34:47.013575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:34:47.014057 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:34:47.014729 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:34:47.015050 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:34:47.021884 kernel: [drm] number of scanouts: 1 Jan 24 00:34:47.021918 kernel: [drm] number of cap sets: 0 Jan 24 00:34:47.024954 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 24 00:34:47.042919 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:47.043138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:34:47.050201 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:34:47.052012 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1310) Jan 24 00:34:47.054174 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:34:47.056180 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:34:47.061651 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 24 00:34:47.061687 kernel: Console: switching to colour frame buffer device 160x50 Jan 24 00:34:47.063541 augenrules[1411]: No rules Jan 24 00:34:47.075342 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 24 00:34:47.078616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:34:47.086686 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:34:47.089997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:34:47.090062 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:47.093342 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:34:47.099259 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:34:47.100720 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:34:47.101849 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:34:47.102376 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:34:47.102498 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:34:47.103135 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:34:47.103260 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:34:47.103783 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:34:47.103905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 24 00:34:47.127691 systemd[1]: Finished ensure-sysext.service. Jan 24 00:34:47.136072 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:34:47.138740 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:47.138914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:34:47.145206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:34:47.148043 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:34:47.151433 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:34:47.154235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:34:47.155072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:34:47.158883 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:34:47.162980 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:34:47.166150 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:34:47.166214 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:34:47.166272 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:47.168058 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:34:47.169323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:34:47.170114 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:34:47.174504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:34:47.175989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:34:47.177254 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:34:47.179163 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:34:47.179340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:34:47.191163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:34:47.210064 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:34:47.220059 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:34:47.236406 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:34:47.237145 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:34:47.246811 systemd-resolved[1402]: Positive Trust Anchors: Jan 24 00:34:47.248960 systemd-resolved[1402]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:34:47.248985 systemd-resolved[1402]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:34:47.249561 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:34:47.249732 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:34:47.252876 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:34:47.253354 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:34:47.256364 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:34:47.256887 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:34:47.264612 systemd-resolved[1402]: Using system hostname 'ci-4081-3-6-n-56b1d28098'. Jan 24 00:34:47.266915 systemd-networkd[1401]: lo: Link UP Jan 24 00:34:47.267548 systemd-networkd[1401]: lo: Gained carrier Jan 24 00:34:47.270118 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:34:47.271357 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:34:47.276045 systemd-networkd[1401]: Enumeration completed Jan 24 00:34:47.276147 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:34:47.276509 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:34:47.276514 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:34:47.276534 systemd[1]: Reached target network.target - Network. Jan 24 00:34:47.278250 systemd-networkd[1401]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:34:47.278259 systemd-networkd[1401]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:34:47.278747 systemd-networkd[1401]: eth0: Link UP Jan 24 00:34:47.278756 systemd-networkd[1401]: eth0: Gained carrier Jan 24 00:34:47.278767 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:34:47.284160 systemd-networkd[1401]: eth1: Link UP Jan 24 00:34:47.284167 systemd-networkd[1401]: eth1: Gained carrier Jan 24 00:34:47.284177 systemd-networkd[1401]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:34:47.289064 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:34:47.289771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:34:47.290362 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:34:47.295513 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 24 00:34:47.301347 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:34:47.307023 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:34:47.308551 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:34:47.309093 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:34:47.309522 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:34:47.309927 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:34:47.312325 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:34:47.312708 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:34:47.312732 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:34:47.313092 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:34:47.313574 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:34:47.314393 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:34:47.316464 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:34:47.319979 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:34:47.320989 systemd-networkd[1401]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 24 00:34:47.321618 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:34:47.327088 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:34:47.327095 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jan 24 00:34:47.328072 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:34:47.328486 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:34:47.328846 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:34:47.331451 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:34:47.331477 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:34:47.332544 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:34:47.334388 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:34:47.336996 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:34:47.340399 systemd-networkd[1401]: eth0: DHCPv4 address 65.21.184.255/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 24 00:34:47.341673 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jan 24 00:34:47.345101 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:34:47.348832 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:34:47.352540 jq[1471]: false Jan 24 00:34:47.351161 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:34:47.357098 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 24 00:34:47.359105 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:34:47.367128 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 24 00:34:47.370094 coreos-metadata[1469]: Jan 24 00:34:47.369 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 24 00:34:47.370806 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:34:47.376124 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:34:47.383733 coreos-metadata[1469]: Jan 24 00:34:47.383 INFO Fetch successful Jan 24 00:34:47.383792 coreos-metadata[1469]: Jan 24 00:34:47.383 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 24 00:34:47.385166 coreos-metadata[1469]: Jan 24 00:34:47.384 INFO Fetch successful Jan 24 00:34:47.387434 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:34:47.389189 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:34:47.390231 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:34:47.391518 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:34:47.395260 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:34:47.398672 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:34:47.400280 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:34:47.400427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:34:47.400809 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:34:47.400973 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:34:47.412524 dbus-daemon[1470]: [system] SELinux support is enabled Jan 24 00:34:47.413662 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:34:47.418836 extend-filesystems[1472]: Found loop4 Jan 24 00:34:47.421617 extend-filesystems[1472]: Found loop5 Jan 24 00:34:47.421617 extend-filesystems[1472]: Found loop6 Jan 24 00:34:47.421617 extend-filesystems[1472]: Found loop7 Jan 24 00:34:47.421617 extend-filesystems[1472]: Found sda Jan 24 00:34:47.421617 extend-filesystems[1472]: Found sda1 Jan 24 00:34:47.421617 extend-filesystems[1472]: Found sda2 Jan 24 00:34:47.421617 extend-filesystems[1472]: Found sda3 Jan 24 00:34:47.421617 extend-filesystems[1472]: Found usr Jan 24 00:34:47.421617 extend-filesystems[1472]: Found sda4 Jan 24 00:34:47.421617 extend-filesystems[1472]: Found sda6 Jan 24 00:34:47.421617 extend-filesystems[1472]: Found sda7 Jan 24 00:34:47.421617 extend-filesystems[1472]: Found sda9 Jan 24 00:34:47.421617 extend-filesystems[1472]: Checking size of /dev/sda9 Jan 24 00:34:47.469392 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Jan 24 00:34:47.424444 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:34:47.469511 extend-filesystems[1472]: Resized partition /dev/sda9 Jan 24 00:34:47.424491 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 24 00:34:47.472361 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:34:47.472825 jq[1489]: true Jan 24 00:34:47.434827 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:34:47.434845 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:34:47.441172 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:34:47.441345 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:34:47.489501 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:34:47.496057 update_engine[1488]: I20260124 00:34:47.495997 1488 main.cc:92] Flatcar Update Engine starting Jan 24 00:34:47.498246 jq[1508]: true Jan 24 00:34:47.508980 tar[1495]: linux-amd64/LICENSE Jan 24 00:34:47.508980 tar[1495]: linux-amd64/helm Jan 24 00:34:47.510436 update_engine[1488]: I20260124 00:34:47.510402 1488 update_check_scheduler.cc:74] Next update check in 9m54s Jan 24 00:34:47.510611 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:34:47.519099 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:34:47.575247 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:34:47.576212 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:34:47.592425 systemd-logind[1487]: New seat seat0. Jan 24 00:34:47.595448 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 00:34:47.595469 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:34:47.595624 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:34:47.635787 bash[1537]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:34:47.637269 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:34:47.638961 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1338) Jan 24 00:34:47.650993 systemd[1]: Starting sshkeys.service... Jan 24 00:34:47.695198 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:34:47.700951 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:34:47.709956 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:34:47.733199 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:34:47.743653 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:34:47.757747 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:34:47.758274 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:34:47.761223 coreos-metadata[1554]: Jan 24 00:34:47.759 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 24 00:34:47.763833 coreos-metadata[1554]: Jan 24 00:34:47.763 INFO Fetch successful Jan 24 00:34:47.766100 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:34:47.769840 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 24 00:34:47.772997 unknown[1554]: wrote ssh authorized keys file for user: core Jan 24 00:34:47.778923 containerd[1515]: time="2026-01-24T00:34:47.778859704Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:34:47.789409 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:34:47.794321 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:34:47.799209 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:34:47.799716 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:34:47.810507 containerd[1515]: time="2026-01-24T00:34:47.810460886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:34:47.811848 containerd[1515]: time="2026-01-24T00:34:47.811816319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:34:47.811848 containerd[1515]: time="2026-01-24T00:34:47.811845199Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:34:47.811900 containerd[1515]: time="2026-01-24T00:34:47.811856539Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:34:47.812019 containerd[1515]: time="2026-01-24T00:34:47.812002599Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:34:47.812019 containerd[1515]: time="2026-01-24T00:34:47.812017179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:34:47.812083 containerd[1515]: time="2026-01-24T00:34:47.812068029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:34:47.812083 containerd[1515]: time="2026-01-24T00:34:47.812078639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:34:47.812252 containerd[1515]: time="2026-01-24T00:34:47.812233969Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:34:47.812252 containerd[1515]: time="2026-01-24T00:34:47.812248129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:34:47.812283 containerd[1515]: time="2026-01-24T00:34:47.812257649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:34:47.812283 containerd[1515]: time="2026-01-24T00:34:47.812276809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:34:47.812355 containerd[1515]: time="2026-01-24T00:34:47.812340820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 24 00:34:47.815788 containerd[1515]: time="2026-01-24T00:34:47.815529115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:34:47.815788 containerd[1515]: time="2026-01-24T00:34:47.815639375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:34:47.815788 containerd[1515]: time="2026-01-24T00:34:47.815650875Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:34:47.815788 containerd[1515]: time="2026-01-24T00:34:47.815751125Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:34:47.815857 containerd[1515]: time="2026-01-24T00:34:47.815794765Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:34:47.842604 containerd[1515]: time="2026-01-24T00:34:47.842568690Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:34:47.842664 containerd[1515]: time="2026-01-24T00:34:47.842646740Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:34:47.842681 containerd[1515]: time="2026-01-24T00:34:47.842668640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:34:47.842695 containerd[1515]: time="2026-01-24T00:34:47.842681120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:34:47.842695 containerd[1515]: time="2026-01-24T00:34:47.842692230Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:34:47.842834 containerd[1515]: time="2026-01-24T00:34:47.842819050Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:34:47.843207 update-ssh-keys[1572]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:34:47.844223 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:34:47.845239 containerd[1515]: time="2026-01-24T00:34:47.844250353Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:34:47.845239 containerd[1515]: time="2026-01-24T00:34:47.845123194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:34:47.845239 containerd[1515]: time="2026-01-24T00:34:47.845142314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:34:47.845239 containerd[1515]: time="2026-01-24T00:34:47.845162634Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:34:47.845239 containerd[1515]: time="2026-01-24T00:34:47.845175054Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:34:47.845239 containerd[1515]: time="2026-01-24T00:34:47.845184504Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 24 00:34:47.845239 containerd[1515]: time="2026-01-24T00:34:47.845198164Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:34:47.845239 containerd[1515]: time="2026-01-24T00:34:47.845210524Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:34:47.845239 containerd[1515]: time="2026-01-24T00:34:47.845222434Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:34:47.845239 containerd[1515]: time="2026-01-24T00:34:47.845234444Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845245704Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845256514Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845273524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845283854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845294654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845305664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845316865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845331725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845342385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845351935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845362535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845374525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845386 containerd[1515]: time="2026-01-24T00:34:47.845384835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845544 containerd[1515]: time="2026-01-24T00:34:47.845394325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845544 containerd[1515]: time="2026-01-24T00:34:47.845404755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 24 00:34:47.845544 containerd[1515]: time="2026-01-24T00:34:47.845418075Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:34:47.845544 containerd[1515]: time="2026-01-24T00:34:47.845435355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845544 containerd[1515]: time="2026-01-24T00:34:47.845448575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845544 containerd[1515]: time="2026-01-24T00:34:47.845457965Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:34:47.845544 containerd[1515]: time="2026-01-24T00:34:47.845495425Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:34:47.845544 containerd[1515]: time="2026-01-24T00:34:47.845510165Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:34:47.845544 containerd[1515]: time="2026-01-24T00:34:47.845520005Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:34:47.845544 containerd[1515]: time="2026-01-24T00:34:47.845530625Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:34:47.845544 containerd[1515]: time="2026-01-24T00:34:47.845537695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.845694 containerd[1515]: time="2026-01-24T00:34:47.845547815Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:34:47.845694 containerd[1515]: time="2026-01-24T00:34:47.845560955Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:34:47.845694 containerd[1515]: time="2026-01-24T00:34:47.845568085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:34:47.849714 systemd[1]: Finished sshkeys.service. 
Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.846119896Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.846262606Z" level=info msg="Connect containerd service" Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.846319896Z" level=info msg="using legacy CRI server" Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.846327106Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.846582157Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.847425478Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:34:47.851010 containerd[1515]: 
time="2026-01-24T00:34:47.847478178Z" level=info msg="Start subscribing containerd event" Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.847507938Z" level=info msg="Start recovering state" Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.847576148Z" level=info msg="Start event monitor" Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.847586378Z" level=info msg="Start snapshots syncer" Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.847604358Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.847620338Z" level=info msg="Start streaming server" Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.848227559Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.848317610Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:34:47.851010 containerd[1515]: time="2026-01-24T00:34:47.849774772Z" level=info msg="containerd successfully booted in 0.074267s" Jan 24 00:34:47.851716 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:34:47.869016 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Jan 24 00:34:47.895953 extend-filesystems[1507]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 24 00:34:47.895953 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 24 00:34:47.895953 extend-filesystems[1507]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Jan 24 00:34:47.897959 extend-filesystems[1472]: Resized filesystem in /dev/sda9 Jan 24 00:34:47.897959 extend-filesystems[1472]: Found sr0 Jan 24 00:34:47.899135 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:34:47.900258 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:34:48.132016 tar[1495]: linux-amd64/README.md Jan 24 00:34:48.144404 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:34:48.658174 systemd-networkd[1401]: eth0: Gained IPv6LL Jan 24 00:34:48.660153 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jan 24 00:34:48.664091 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:34:48.666547 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:34:48.679550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:34:48.689327 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:34:48.737551 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:34:49.171138 systemd-networkd[1401]: eth1: Gained IPv6LL Jan 24 00:34:49.172607 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jan 24 00:34:50.061401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:34:50.064390 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:34:50.065277 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:34:50.068432 systemd[1]: Startup finished in 1.706s (kernel) + 6.179s (initrd) + 5.107s (userspace) = 12.993s. 
Jan 24 00:34:50.878149 kubelet[1600]: E0124 00:34:50.878053 1600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:34:50.885224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:34:50.885749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:34:50.886603 systemd[1]: kubelet.service: Consumed 1.738s CPU time. Jan 24 00:34:53.598540 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:34:53.605165 systemd[1]: Started sshd@0-65.21.184.255:22-20.161.92.111:39884.service - OpenSSH per-connection server daemon (20.161.92.111:39884). Jan 24 00:34:54.365174 sshd[1611]: Accepted publickey for core from 20.161.92.111 port 39884 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:54.367132 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:54.382063 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:34:54.390304 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:34:54.395804 systemd-logind[1487]: New session 1 of user core. Jan 24 00:34:54.414443 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:34:54.426745 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:34:54.434518 (systemd)[1615]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:34:54.580535 systemd[1615]: Queued start job for default target default.target. Jan 24 00:34:54.590930 systemd[1615]: Created slice app.slice - User Application Slice. Jan 24 00:34:54.590973 systemd[1615]: Reached target paths.target - Paths. Jan 24 00:34:54.590983 systemd[1615]: Reached target timers.target - Timers. Jan 24 00:34:54.592308 systemd[1615]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:34:54.612206 systemd[1615]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:34:54.612327 systemd[1615]: Reached target sockets.target - Sockets. Jan 24 00:34:54.612341 systemd[1615]: Reached target basic.target - Basic System. Jan 24 00:34:54.612382 systemd[1615]: Reached target default.target - Main User Target. Jan 24 00:34:54.612419 systemd[1615]: Startup finished in 165ms. Jan 24 00:34:54.612651 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:34:54.620047 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:34:55.165290 systemd[1]: Started sshd@1-65.21.184.255:22-20.161.92.111:39900.service - OpenSSH per-connection server daemon (20.161.92.111:39900). Jan 24 00:34:55.937275 sshd[1626]: Accepted publickey for core from 20.161.92.111 port 39900 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:55.940420 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:55.950046 systemd-logind[1487]: New session 2 of user core. Jan 24 00:34:55.956245 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:34:56.475755 sshd[1626]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:56.481691 systemd[1]: sshd@1-65.21.184.255:22-20.161.92.111:39900.service: Deactivated successfully. 
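Annotation (not part of the journal): the kubelet failure above is the expected crash-loop on a node that has not yet joined a cluster. /var/lib/kubelet/config.yaml is typically written by kubeadm during join, so until then every start attempt exits with status 1 and systemd schedules a restart (the restart counter climbs in the entries that follow). A hypothetical sketch of that precondition, purely for illustration; the wait loop below is not Flatcar or kubelet code:

    # Hypothetical illustration of why kubelet.service keeps restarting:
    # the unit cannot succeed until this file exists (kubeadm normally
    # creates it when the node joins the cluster).
    import os
    import time

    CONFIG = "/var/lib/kubelet/config.yaml"  # path taken from the error above

    def wait_for_kubelet_config(poll_s: float = 10.0, attempts: int = 30) -> bool:
        for _ in range(attempts):
            if os.path.exists(CONFIG):
                return True       # kubelet could now load its config
            time.sleep(poll_s)    # mirrors systemd's periodic restart attempts
        return False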
Jan 24 00:34:56.485401 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:34:56.487683 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:34:56.490031 systemd-logind[1487]: Removed session 2. Jan 24 00:34:56.613357 systemd[1]: Started sshd@2-65.21.184.255:22-20.161.92.111:39916.service - OpenSSH per-connection server daemon (20.161.92.111:39916). Jan 24 00:34:57.389927 sshd[1633]: Accepted publickey for core from 20.161.92.111 port 39916 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:57.392650 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:57.400926 systemd-logind[1487]: New session 3 of user core. Jan 24 00:34:57.412221 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:34:57.921891 sshd[1633]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:57.927893 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:34:57.929412 systemd[1]: sshd@2-65.21.184.255:22-20.161.92.111:39916.service: Deactivated successfully. Jan 24 00:34:57.932972 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:34:57.934658 systemd-logind[1487]: Removed session 3. Jan 24 00:34:58.060304 systemd[1]: Started sshd@3-65.21.184.255:22-20.161.92.111:39926.service - OpenSSH per-connection server daemon (20.161.92.111:39926). Jan 24 00:34:58.829328 sshd[1640]: Accepted publickey for core from 20.161.92.111 port 39926 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:58.832070 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:58.839707 systemd-logind[1487]: New session 4 of user core. Jan 24 00:34:58.846178 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:34:59.368086 sshd[1640]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:59.374391 systemd[1]: sshd@3-65.21.184.255:22-20.161.92.111:39926.service: Deactivated successfully. Jan 24 00:34:59.378087 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:34:59.379179 systemd-logind[1487]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:34:59.380727 systemd-logind[1487]: Removed session 4. Jan 24 00:34:59.505361 systemd[1]: Started sshd@4-65.21.184.255:22-20.161.92.111:39932.service - OpenSSH per-connection server daemon (20.161.92.111:39932). Jan 24 00:35:00.273265 sshd[1647]: Accepted publickey for core from 20.161.92.111 port 39932 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:35:00.276025 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:35:00.283612 systemd-logind[1487]: New session 5 of user core. Jan 24 00:35:00.289188 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:35:00.698985 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:35:00.699652 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:35:00.719831 sudo[1650]: pam_unix(sudo:session): session closed for user root Jan 24 00:35:00.842880 sshd[1647]: pam_unix(sshd:session): session closed for user core Jan 24 00:35:00.849813 systemd[1]: sshd@4-65.21.184.255:22-20.161.92.111:39932.service: Deactivated successfully. Jan 24 00:35:00.853450 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:35:00.854656 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit. 
Jan 24 00:35:00.856380 systemd-logind[1487]: Removed session 5. Jan 24 00:35:00.934576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:35:00.941193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:35:00.976372 systemd[1]: Started sshd@5-65.21.184.255:22-20.161.92.111:39938.service - OpenSSH per-connection server daemon (20.161.92.111:39938). Jan 24 00:35:01.124459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:35:01.128077 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:35:01.157817 kubelet[1665]: E0124 00:35:01.157764 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:35:01.167230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:35:01.167404 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:35:01.764398 sshd[1658]: Accepted publickey for core from 20.161.92.111 port 39938 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:35:01.767263 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:35:01.775213 systemd-logind[1487]: New session 6 of user core. Jan 24 00:35:01.785166 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:35:02.183261 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:35:02.184255 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:35:02.190759 sudo[1674]: pam_unix(sudo:session): session closed for user root Jan 24 00:35:02.202147 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:35:02.202875 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:35:02.229759 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:35:02.234446 auditctl[1677]: No rules Jan 24 00:35:02.234027 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:35:02.234429 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:35:02.244446 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:35:02.301401 augenrules[1695]: No rules Jan 24 00:35:02.304133 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:35:02.306312 sudo[1673]: pam_unix(sudo:session): session closed for user root Jan 24 00:35:02.430030 sshd[1658]: pam_unix(sshd:session): session closed for user core Jan 24 00:35:02.436636 systemd[1]: sshd@5-65.21.184.255:22-20.161.92.111:39938.service: Deactivated successfully. Jan 24 00:35:02.440183 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:35:02.441166 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:35:02.443025 systemd-logind[1487]: Removed session 6. Jan 24 00:35:02.572392 systemd[1]: Started sshd@6-65.21.184.255:22-20.161.92.111:40502.service - OpenSSH per-connection server daemon (20.161.92.111:40502). 
Jan 24 00:35:03.347001 sshd[1703]: Accepted publickey for core from 20.161.92.111 port 40502 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:35:03.349733 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:35:03.358029 systemd-logind[1487]: New session 7 of user core. Jan 24 00:35:03.365189 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:35:03.769447 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:35:03.770487 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:35:04.216299 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:35:04.229846 (dockerd)[1722]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:35:04.657915 dockerd[1722]: time="2026-01-24T00:35:04.657492362Z" level=info msg="Starting up" Jan 24 00:35:04.816750 dockerd[1722]: time="2026-01-24T00:35:04.816698157Z" level=info msg="Loading containers: start." Jan 24 00:35:04.956998 kernel: Initializing XFRM netlink socket Jan 24 00:35:04.999337 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jan 24 00:35:05.028786 systemd-timesyncd[1442]: Contacted time server 46.224.156.215:123 (2.flatcar.pool.ntp.org). Jan 24 00:35:05.029068 systemd-timesyncd[1442]: Initial clock synchronization to Sat 2026-01-24 00:35:05.395803 UTC. Jan 24 00:35:05.096703 systemd-networkd[1401]: docker0: Link UP Jan 24 00:35:05.123467 dockerd[1722]: time="2026-01-24T00:35:05.123396279Z" level=info msg="Loading containers: done." Jan 24 00:35:05.147778 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1487759087-merged.mount: Deactivated successfully. Jan 24 00:35:05.150732 dockerd[1722]: time="2026-01-24T00:35:05.149855653Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:35:05.150732 dockerd[1722]: time="2026-01-24T00:35:05.149998753Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:35:05.150732 dockerd[1722]: time="2026-01-24T00:35:05.150147253Z" level=info msg="Daemon has completed initialization" Jan 24 00:35:05.194604 dockerd[1722]: time="2026-01-24T00:35:05.194511207Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:35:05.195000 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:35:06.605740 containerd[1515]: time="2026-01-24T00:35:06.605678526Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 24 00:35:07.254163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976695848.mount: Deactivated successfully. 
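Annotation (not part of the journal): the systemd-timesyncd entries above also show the size of the initial clock step. The synchronization message carries journal timestamp 00:35:05.029068, while the clock was set to 00:35:05.395803 UTC, a forward step of roughly 0.37 s (approximate, since the journal stamp itself sits at the sync boundary). A small illustrative check, with both timestamps taken from the log:

    # Estimate of the initial NTP clock step from the two timestamps above.
    from datetime import datetime

    logged = datetime.fromisoformat("2026-01-24 00:35:05.029068")  # journal stamp
    synced = datetime.fromisoformat("2026-01-24 00:35:05.395803")  # synced time
    print(f"+{(synced - logged).total_seconds():.3f} s")  # +0.367 s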
Jan 24 00:35:08.457835 containerd[1515]: time="2026-01-24T00:35:08.457786669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:08.458751 containerd[1515]: time="2026-01-24T00:35:08.458715235Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114812"
Jan 24 00:35:08.459654 containerd[1515]: time="2026-01-24T00:35:08.459363130Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:08.462323 containerd[1515]: time="2026-01-24T00:35:08.461430452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:08.462323 containerd[1515]: time="2026-01-24T00:35:08.462210784Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.856481515s"
Jan 24 00:35:08.462323 containerd[1515]: time="2026-01-24T00:35:08.462234924Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 24 00:35:08.462763 containerd[1515]: time="2026-01-24T00:35:08.462737782Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 24 00:35:09.746787 containerd[1515]: time="2026-01-24T00:35:09.746739709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:09.747900 containerd[1515]: time="2026-01-24T00:35:09.747715687Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016803"
Jan 24 00:35:09.748983 containerd[1515]: time="2026-01-24T00:35:09.748789060Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:09.751241 containerd[1515]: time="2026-01-24T00:35:09.751223628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:09.751862 containerd[1515]: time="2026-01-24T00:35:09.751840137Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.289068127s"
Jan 24 00:35:09.751896 containerd[1515]: time="2026-01-24T00:35:09.751866504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 24 00:35:09.752212 containerd[1515]: time="2026-01-24T00:35:09.752195246Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 24 00:35:10.895510 containerd[1515]: time="2026-01-24T00:35:10.895456063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:10.896655 containerd[1515]: time="2026-01-24T00:35:10.896432723Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158124"
Jan 24 00:35:10.898305 containerd[1515]: time="2026-01-24T00:35:10.897319641Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:10.899557 containerd[1515]: time="2026-01-24T00:35:10.899537104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:10.900366 containerd[1515]: time="2026-01-24T00:35:10.900347240Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.148129568s"
Jan 24 00:35:10.900429 containerd[1515]: time="2026-01-24T00:35:10.900418066Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 24 00:35:10.901090 containerd[1515]: time="2026-01-24T00:35:10.901067529Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 24 00:35:11.184599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 24 00:35:11.191274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:35:11.388613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:35:11.392898 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 00:35:11.428772 kubelet[1932]: E0124 00:35:11.428641 1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 00:35:11.434130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:35:11.435396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 00:35:12.122822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033852485.mount: Deactivated successfully.
Jan 24 00:35:12.437456 containerd[1515]: time="2026-01-24T00:35:12.437410643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:12.438509 containerd[1515]: time="2026-01-24T00:35:12.438334233Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930124"
Jan 24 00:35:12.440135 containerd[1515]: time="2026-01-24T00:35:12.439383001Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:12.441492 containerd[1515]: time="2026-01-24T00:35:12.440964761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:12.441492 containerd[1515]: time="2026-01-24T00:35:12.441362471Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.540264701s"
Jan 24 00:35:12.441492 containerd[1515]: time="2026-01-24T00:35:12.441384537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 24 00:35:12.441853 containerd[1515]: time="2026-01-24T00:35:12.441833135Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 24 00:35:12.950492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756405084.mount: Deactivated successfully.
Jan 24 00:35:13.983783 containerd[1515]: time="2026-01-24T00:35:13.983705416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:13.985509 containerd[1515]: time="2026-01-24T00:35:13.985140089Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332"
Jan 24 00:35:13.988850 containerd[1515]: time="2026-01-24T00:35:13.986698942Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:13.993082 containerd[1515]: time="2026-01-24T00:35:13.990822962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:13.993082 containerd[1515]: time="2026-01-24T00:35:13.992726541Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.550740603s"
Jan 24 00:35:13.993082 containerd[1515]: time="2026-01-24T00:35:13.992788093Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 24 00:35:13.993616 containerd[1515]: time="2026-01-24T00:35:13.993587818Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 24 00:35:14.521890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2743492558.mount: Deactivated successfully.
Jan 24 00:35:14.529708 containerd[1515]: time="2026-01-24T00:35:14.529627695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:14.530931 containerd[1515]: time="2026-01-24T00:35:14.530876768Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Jan 24 00:35:14.533985 containerd[1515]: time="2026-01-24T00:35:14.532128950Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:14.535762 containerd[1515]: time="2026-01-24T00:35:14.535722380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:14.537244 containerd[1515]: time="2026-01-24T00:35:14.537193432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 543.49668ms"
Jan 24 00:35:14.537356 containerd[1515]: time="2026-01-24T00:35:14.537247441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 24 00:35:14.537902 containerd[1515]: time="2026-01-24T00:35:14.537843287Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 24 00:35:15.087654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1636769243.mount: Deactivated successfully.
Jan 24 00:35:16.752832 containerd[1515]: time="2026-01-24T00:35:16.752775697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:16.754189 containerd[1515]: time="2026-01-24T00:35:16.753966634Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926291"
Jan 24 00:35:16.755363 containerd[1515]: time="2026-01-24T00:35:16.754976681Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:16.757253 containerd[1515]: time="2026-01-24T00:35:16.757211771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:16.757905 containerd[1515]: time="2026-01-24T00:35:16.757878227Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.219842073s"
Jan 24 00:35:16.757905 containerd[1515]: time="2026-01-24T00:35:16.757903994Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 24 00:35:19.605065 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
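With the etcd pull finished, every image the control plane needs is now cached locally. Collected from the pull messages above into one list (presumably the same set a matching `kubeadm config images pull` would fetch, though this log does not say so):

# Control-plane images pulled during this boot, per the log above
- registry.k8s.io/kube-apiserver:v1.33.7
- registry.k8s.io/kube-controller-manager:v1.33.7
- registry.k8s.io/kube-scheduler:v1.33.7
- registry.k8s.io/kube-proxy:v1.33.7
- registry.k8s.io/coredns/coredns:v1.12.0
- registry.k8s.io/pause:3.10
- registry.k8s.io/etcd:3.5.21-0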
Jan 24 00:35:19.613106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:35:19.649811 systemd[1]: Reloading requested from client PID 2087 ('systemctl') (unit session-7.scope)...
Jan 24 00:35:19.649841 systemd[1]: Reloading...
Jan 24 00:35:19.798740 zram_generator::config[2149]: No configuration found.
Jan 24 00:35:19.861404 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:35:19.923964 systemd[1]: Reloading finished in 273 ms.
Jan 24 00:35:19.965863 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 24 00:35:19.965966 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 24 00:35:19.966182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:35:19.972537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:35:20.091412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:35:20.096652 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 00:35:20.128179 kubelet[2180]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:35:20.128179 kubelet[2180]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 00:35:20.128179 kubelet[2180]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
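The three deprecation warnings all point at the same fix: move the flags into the file passed to --config. A sketch of the equivalent KubeletConfiguration fields (the containerd socket path is an assumption based on the containerd runtime this log shows; the volume plugin dir is the path the kubelet itself logs below):

# Config-file equivalents of the deprecated kubelet flags (sketch)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock     # assumption: containerd default socket
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
# --pod-infra-container-image has no config-file equivalent; per the warning
# it is removed in 1.35 and the sandbox image comes from the CRI instead.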
Jan 24 00:35:20.128179 kubelet[2180]: I0124 00:35:20.128125 2180 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 00:35:20.388247 kubelet[2180]: I0124 00:35:20.388116 2180 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 24 00:35:20.388247 kubelet[2180]: I0124 00:35:20.388137 2180 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 00:35:20.388415 kubelet[2180]: I0124 00:35:20.388304 2180 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 24 00:35:20.402978 kubelet[2180]: I0124 00:35:20.402918 2180 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 00:35:20.405570 kubelet[2180]: E0124 00:35:20.405536 2180 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://65.21.184.255:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 65.21.184.255:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 24 00:35:20.408821 kubelet[2180]: E0124 00:35:20.408779 2180 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 00:35:20.408821 kubelet[2180]: I0124 00:35:20.408804 2180 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 24 00:35:20.412083 kubelet[2180]: I0124 00:35:20.412061 2180 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 24 00:35:20.412302 kubelet[2180]: I0124 00:35:20.412260 2180 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 00:35:20.412417 kubelet[2180]: I0124 00:35:20.412276 2180 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-56b1d28098","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 24 00:35:20.412417 kubelet[2180]: I0124 00:35:20.412398 2180 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 00:35:20.412417 kubelet[2180]: I0124 00:35:20.412404 2180 container_manager_linux.go:303] "Creating device plugin manager"
Jan 24 00:35:20.413139 kubelet[2180]: I0124 00:35:20.413088 2180 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:35:20.415267 kubelet[2180]: I0124 00:35:20.415223 2180 kubelet.go:480] "Attempting to sync node with API server"
Jan 24 00:35:20.415267 kubelet[2180]: I0124 00:35:20.415243 2180 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 00:35:20.416627 kubelet[2180]: I0124 00:35:20.415738 2180 kubelet.go:386] "Adding apiserver pod source"
Jan 24 00:35:20.416627 kubelet[2180]: I0124 00:35:20.415755 2180 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 00:35:20.430971 kubelet[2180]: I0124 00:35:20.430317 2180 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 24 00:35:20.430971 kubelet[2180]: I0124 00:35:20.430720 2180 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 24 00:35:20.431299 kubelet[2180]: E0124 00:35:20.431252 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://65.21.184.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.21.184.255:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 24 00:35:20.431379 kubelet[2180]: W0124 00:35:20.431369 2180 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 24 00:35:20.431520 kubelet[2180]: E0124 00:35:20.431432 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://65.21.184.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-56b1d28098&limit=500&resourceVersion=0\": dial tcp 65.21.184.255:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 24 00:35:20.434477 kubelet[2180]: I0124 00:35:20.434445 2180 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 24 00:35:20.434477 kubelet[2180]: I0124 00:35:20.434483 2180 server.go:1289] "Started kubelet"
Jan 24 00:35:20.435713 kubelet[2180]: I0124 00:35:20.435654 2180 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 00:35:20.437005 kubelet[2180]: I0124 00:35:20.436204 2180 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 00:35:20.437005 kubelet[2180]: I0124 00:35:20.436374 2180 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 00:35:20.437005 kubelet[2180]: I0124 00:35:20.436446 2180 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 00:35:20.438132 kubelet[2180]: I0124 00:35:20.438105 2180 server.go:317] "Adding debug handlers to kubelet server"
Jan 24 00:35:20.444862 kubelet[2180]: I0124 00:35:20.444794 2180 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 24 00:35:20.445307 kubelet[2180]: I0124 00:35:20.445280 2180 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 24 00:35:20.447401 kubelet[2180]: E0124 00:35:20.447369 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-56b1d28098\" not found"
Jan 24 00:35:20.450478 kubelet[2180]: I0124 00:35:20.450449 2180 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 24 00:35:20.450879 kubelet[2180]: I0124 00:35:20.450859 2180 reconciler.go:26] "Reconciler: start to sync state"
Jan 24 00:35:20.451860 kubelet[2180]: E0124 00:35:20.451809 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.184.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-56b1d28098?timeout=10s\": dial tcp 65.21.184.255:6443: connect: connection refused" interval="200ms"
Jan 24 00:35:20.454105 kubelet[2180]: E0124 00:35:20.452094 2180 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://65.21.184.255:6443/api/v1/namespaces/default/events\": dial tcp 65.21.184.255:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-56b1d28098.188d8398afd12d67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-56b1d28098,UID:ci-4081-3-6-n-56b1d28098,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-56b1d28098,},FirstTimestamp:2026-01-24 00:35:20.434462055 +0000 UTC m=+0.334802664,LastTimestamp:2026-01-24 00:35:20.434462055 +0000 UTC m=+0.334802664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-56b1d28098,}"
Jan 24 00:35:20.455748 kubelet[2180]: E0124 00:35:20.455667 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://65.21.184.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.21.184.255:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 24 00:35:20.456223 kubelet[2180]: I0124 00:35:20.456199 2180 factory.go:223] Registration of the containerd container factory successfully
Jan 24 00:35:20.456326 kubelet[2180]: I0124 00:35:20.456312 2180 factory.go:223] Registration of the systemd container factory successfully
Jan 24 00:35:20.456529 kubelet[2180]: I0124 00:35:20.456502 2180 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 24 00:35:20.461190 kubelet[2180]: I0124 00:35:20.461147 2180 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 24 00:35:20.462979 kubelet[2180]: I0124 00:35:20.462195 2180 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 24 00:35:20.462979 kubelet[2180]: I0124 00:35:20.462222 2180 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 24 00:35:20.462979 kubelet[2180]: I0124 00:35:20.462236 2180 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 24 00:35:20.462979 kubelet[2180]: I0124 00:35:20.462243 2180 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 24 00:35:20.462979 kubelet[2180]: E0124 00:35:20.462289 2180 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 24 00:35:20.477413 kubelet[2180]: E0124 00:35:20.477361 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://65.21.184.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.21.184.255:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 24 00:35:20.478913 kubelet[2180]: E0124 00:35:20.478883 2180 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 24 00:35:20.486007 kubelet[2180]: I0124 00:35:20.485991 2180 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 24 00:35:20.486007 kubelet[2180]: I0124 00:35:20.486002 2180 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 24 00:35:20.486007 kubelet[2180]: I0124 00:35:20.486014 2180 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:35:20.488314 kubelet[2180]: I0124 00:35:20.488262 2180 policy_none.go:49] "None policy: Start"
Jan 24 00:35:20.488314 kubelet[2180]: I0124 00:35:20.488278 2180 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 24 00:35:20.488314 kubelet[2180]: I0124 00:35:20.488288 2180 state_mem.go:35] "Initializing new in-memory state store"
Jan 24 00:35:20.494886 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 24 00:35:20.505612 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
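The nodeConfig dump above flattens the kubelet's eviction and cgroup settings into one JSON blob. Rendered back into KubeletConfiguration form, using only values present in that dump (the JSON percentages 0.1, 0.05, 0.15 map to "10%", "5%", "15%"):

# Eviction thresholds and cgroup driver as they map from the logged nodeConfig
cgroupDriver: systemd
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"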
Jan 24 00:35:20.508456 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 24 00:35:20.517864 kubelet[2180]: E0124 00:35:20.517815 2180 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 24 00:35:20.518275 kubelet[2180]: I0124 00:35:20.518254 2180 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 24 00:35:20.518453 kubelet[2180]: I0124 00:35:20.518386 2180 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 24 00:35:20.519443 kubelet[2180]: I0124 00:35:20.519419 2180 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 24 00:35:20.521566 kubelet[2180]: E0124 00:35:20.521542 2180 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 24 00:35:20.522098 kubelet[2180]: E0124 00:35:20.522061 2180 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-56b1d28098\" not found"
Jan 24 00:35:20.579465 systemd[1]: Created slice kubepods-burstable-pod2c12576f038ace5669437579bd26f640.slice - libcontainer container kubepods-burstable-pod2c12576f038ace5669437579bd26f640.slice.
Jan 24 00:35:20.598259 kubelet[2180]: E0124 00:35:20.598164 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-56b1d28098\" not found" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.607363 systemd[1]: Created slice kubepods-burstable-pod3a85900c21ac82dc552ac6e4fa04bf33.slice - libcontainer container kubepods-burstable-pod3a85900c21ac82dc552ac6e4fa04bf33.slice.
Jan 24 00:35:20.612845 kubelet[2180]: E0124 00:35:20.612749 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-56b1d28098\" not found" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.615469 systemd[1]: Created slice kubepods-burstable-podf120dfa562fa18b3dfaa9b6620d626fe.slice - libcontainer container kubepods-burstable-podf120dfa562fa18b3dfaa9b6620d626fe.slice.
Jan 24 00:35:20.622978 kubelet[2180]: E0124 00:35:20.620971 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-56b1d28098\" not found" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.625398 kubelet[2180]: I0124 00:35:20.625350 2180 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.626020 kubelet[2180]: E0124 00:35:20.625894 2180 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.21.184.255:6443/api/v1/nodes\": dial tcp 65.21.184.255:6443: connect: connection refused" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.651844 kubelet[2180]: I0124 00:35:20.651683 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c12576f038ace5669437579bd26f640-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-56b1d28098\" (UID: \"2c12576f038ace5669437579bd26f640\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.651844 kubelet[2180]: I0124 00:35:20.651749 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c12576f038ace5669437579bd26f640-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-56b1d28098\" (UID: \"2c12576f038ace5669437579bd26f640\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.652770 kubelet[2180]: E0124 00:35:20.652562 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.184.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-56b1d28098?timeout=10s\": dial tcp 65.21.184.255:6443: connect: connection refused" interval="400ms"
Jan 24 00:35:20.752697 kubelet[2180]: I0124 00:35:20.752626 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c12576f038ace5669437579bd26f640-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-56b1d28098\" (UID: \"2c12576f038ace5669437579bd26f640\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.753119 kubelet[2180]: I0124 00:35:20.753025 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a85900c21ac82dc552ac6e4fa04bf33-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-56b1d28098\" (UID: \"3a85900c21ac82dc552ac6e4fa04bf33\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.753119 kubelet[2180]: I0124 00:35:20.753079 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a85900c21ac82dc552ac6e4fa04bf33-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-56b1d28098\" (UID: \"3a85900c21ac82dc552ac6e4fa04bf33\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.753119 kubelet[2180]: I0124 00:35:20.753103 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a85900c21ac82dc552ac6e4fa04bf33-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-56b1d28098\" (UID: \"3a85900c21ac82dc552ac6e4fa04bf33\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.753467 kubelet[2180]: I0124 00:35:20.753142 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a85900c21ac82dc552ac6e4fa04bf33-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-56b1d28098\" (UID: \"3a85900c21ac82dc552ac6e4fa04bf33\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.753467 kubelet[2180]: I0124 00:35:20.753177 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a85900c21ac82dc552ac6e4fa04bf33-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-56b1d28098\" (UID: \"3a85900c21ac82dc552ac6e4fa04bf33\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.753467 kubelet[2180]: I0124 00:35:20.753196 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f120dfa562fa18b3dfaa9b6620d626fe-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-56b1d28098\" (UID: \"f120dfa562fa18b3dfaa9b6620d626fe\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.829514 kubelet[2180]: I0124 00:35:20.829420 2180 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.830042 kubelet[2180]: E0124 00:35:20.829970 2180 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.21.184.255:6443/api/v1/nodes\": dial tcp 65.21.184.255:6443: connect: connection refused" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:20.900433 containerd[1515]: time="2026-01-24T00:35:20.900337160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-56b1d28098,Uid:2c12576f038ace5669437579bd26f640,Namespace:kube-system,Attempt:0,}"
Jan 24 00:35:20.917612 containerd[1515]: time="2026-01-24T00:35:20.917460462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-56b1d28098,Uid:3a85900c21ac82dc552ac6e4fa04bf33,Namespace:kube-system,Attempt:0,}"
Jan 24 00:35:20.926691 containerd[1515]: time="2026-01-24T00:35:20.926312697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-56b1d28098,Uid:f120dfa562fa18b3dfaa9b6620d626fe,Namespace:kube-system,Attempt:0,}"
Jan 24 00:35:21.053717 kubelet[2180]: E0124 00:35:21.053653 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.184.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-56b1d28098?timeout=10s\": dial tcp 65.21.184.255:6443: connect: connection refused" interval="800ms"
Jan 24 00:35:21.232168 kubelet[2180]: I0124 00:35:21.232123 2180 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:21.232605 kubelet[2180]: E0124 00:35:21.232573 2180 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.21.184.255:6443/api/v1/nodes\": dial tcp 65.21.184.255:6443: connect: connection refused" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:21.411866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133037638.mount: Deactivated successfully.
Jan 24 00:35:21.424147 containerd[1515]: time="2026-01-24T00:35:21.424064081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:35:21.425912 containerd[1515]: time="2026-01-24T00:35:21.425826959Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 24 00:35:21.427787 containerd[1515]: time="2026-01-24T00:35:21.427660331Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:35:21.430471 containerd[1515]: time="2026-01-24T00:35:21.430285218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078"
Jan 24 00:35:21.432972 containerd[1515]: time="2026-01-24T00:35:21.431767005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 24 00:35:21.432972 containerd[1515]: time="2026-01-24T00:35:21.431859505Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:35:21.433232 containerd[1515]: time="2026-01-24T00:35:21.433198693Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:35:21.435095 containerd[1515]: time="2026-01-24T00:35:21.435041855Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 508.630021ms"
Jan 24 00:35:21.441791 containerd[1515]: time="2026-01-24T00:35:21.441723056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:35:21.444992 containerd[1515]: time="2026-01-24T00:35:21.443582237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.137794ms"
Jan 24 00:35:21.454990 containerd[1515]: time="2026-01-24T00:35:21.453225512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.671472ms"
Jan 24 00:35:21.552057 kubelet[2180]: E0124 00:35:21.551855 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://65.21.184.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.21.184.255:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 24 00:35:21.571393 kubelet[2180]: E0124 00:35:21.571352 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://65.21.184.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.21.184.255:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 24 00:35:21.609471 containerd[1515]: time="2026-01-24T00:35:21.609337109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:35:21.610027 containerd[1515]: time="2026-01-24T00:35:21.609923088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:35:21.610265 containerd[1515]: time="2026-01-24T00:35:21.610170200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:35:21.610618 containerd[1515]: time="2026-01-24T00:35:21.610578233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:35:21.612396 containerd[1515]: time="2026-01-24T00:35:21.612278606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:35:21.614478 containerd[1515]: time="2026-01-24T00:35:21.614427843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:35:21.614647 containerd[1515]: time="2026-01-24T00:35:21.614616023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:35:21.616578 containerd[1515]: time="2026-01-24T00:35:21.616258579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:35:21.616578 containerd[1515]: time="2026-01-24T00:35:21.616332977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:35:21.616578 containerd[1515]: time="2026-01-24T00:35:21.616353162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:35:21.616578 containerd[1515]: time="2026-01-24T00:35:21.616473071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:35:21.617294 containerd[1515]: time="2026-01-24T00:35:21.616057159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:35:21.642071 systemd[1]: Started cri-containerd-7698717a34af0476189fb1375339bbd34afaf1e269914d3a7ffbe300cd061817.scope - libcontainer container 7698717a34af0476189fb1375339bbd34afaf1e269914d3a7ffbe300cd061817.
Jan 24 00:35:21.657114 systemd[1]: Started cri-containerd-829de6c52c1b673c120c73680c69b00deaf0771b10f3f4fc88d7a3f73be7a04a.scope - libcontainer container 829de6c52c1b673c120c73680c69b00deaf0771b10f3f4fc88d7a3f73be7a04a.
Jan 24 00:35:21.661501 systemd[1]: Started cri-containerd-9ec3599828aecc8be84364fb47947fa333d27e2dcf064ac6b126a244ea6cd919.scope - libcontainer container 9ec3599828aecc8be84364fb47947fa333d27e2dcf064ac6b126a244ea6cd919.
Jan 24 00:35:21.720613 kubelet[2180]: E0124 00:35:21.720538 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://65.21.184.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-56b1d28098&limit=500&resourceVersion=0\": dial tcp 65.21.184.255:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 24 00:35:21.722649 containerd[1515]: time="2026-01-24T00:35:21.722533088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-56b1d28098,Uid:3a85900c21ac82dc552ac6e4fa04bf33,Namespace:kube-system,Attempt:0,} returns sandbox id \"829de6c52c1b673c120c73680c69b00deaf0771b10f3f4fc88d7a3f73be7a04a\""
Jan 24 00:35:21.731558 containerd[1515]: time="2026-01-24T00:35:21.731536763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-56b1d28098,Uid:f120dfa562fa18b3dfaa9b6620d626fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"7698717a34af0476189fb1375339bbd34afaf1e269914d3a7ffbe300cd061817\""
Jan 24 00:35:21.732576 containerd[1515]: time="2026-01-24T00:35:21.732512886Z" level=info msg="CreateContainer within sandbox \"829de6c52c1b673c120c73680c69b00deaf0771b10f3f4fc88d7a3f73be7a04a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 24 00:35:21.735684 containerd[1515]: time="2026-01-24T00:35:21.735579876Z" level=info msg="CreateContainer within sandbox \"7698717a34af0476189fb1375339bbd34afaf1e269914d3a7ffbe300cd061817\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 24 00:35:21.742166 containerd[1515]: time="2026-01-24T00:35:21.742148111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-56b1d28098,Uid:2c12576f038ace5669437579bd26f640,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ec3599828aecc8be84364fb47947fa333d27e2dcf064ac6b126a244ea6cd919\""
Jan 24 00:35:21.748306 containerd[1515]: time="2026-01-24T00:35:21.748283722Z" level=info msg="CreateContainer within sandbox \"9ec3599828aecc8be84364fb47947fa333d27e2dcf064ac6b126a244ea6cd919\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 24 00:35:21.755331 containerd[1515]: time="2026-01-24T00:35:21.755247110Z" level=info msg="CreateContainer within sandbox \"7698717a34af0476189fb1375339bbd34afaf1e269914d3a7ffbe300cd061817\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e8cb66e9f721cbc97e95dddf8beecf29e9ce1ed100a1c91cf0525e90d67e063\""
Jan 24 00:35:21.756019 containerd[1515]: time="2026-01-24T00:35:21.755930178Z" level=info msg="StartContainer for \"1e8cb66e9f721cbc97e95dddf8beecf29e9ce1ed100a1c91cf0525e90d67e063\""
Jan 24 00:35:21.757617 containerd[1515]: time="2026-01-24T00:35:21.757584064Z" level=info msg="CreateContainer within sandbox \"829de6c52c1b673c120c73680c69b00deaf0771b10f3f4fc88d7a3f73be7a04a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"65c7fa4e072d91e72f0948e4eac2706540d4364bb1f8690248d325808ee98839\""
Jan 24 00:35:21.759532 containerd[1515]: time="2026-01-24T00:35:21.759392392Z" level=info msg="StartContainer for \"65c7fa4e072d91e72f0948e4eac2706540d4364bb1f8690248d325808ee98839\""
Jan 24 00:35:21.765703 containerd[1515]: time="2026-01-24T00:35:21.765681700Z" level=info msg="CreateContainer within sandbox \"9ec3599828aecc8be84364fb47947fa333d27e2dcf064ac6b126a244ea6cd919\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ed21d341d1d3f0233063569700512d110a591b919f4459ec168f3ad338c8c819\""
Jan 24 00:35:21.766057 containerd[1515]: time="2026-01-24T00:35:21.766043971Z" level=info msg="StartContainer for \"ed21d341d1d3f0233063569700512d110a591b919f4459ec168f3ad338c8c819\""
Jan 24 00:35:21.785181 systemd[1]: Started cri-containerd-1e8cb66e9f721cbc97e95dddf8beecf29e9ce1ed100a1c91cf0525e90d67e063.scope - libcontainer container 1e8cb66e9f721cbc97e95dddf8beecf29e9ce1ed100a1c91cf0525e90d67e063.
Jan 24 00:35:21.804557 systemd[1]: Started cri-containerd-65c7fa4e072d91e72f0948e4eac2706540d4364bb1f8690248d325808ee98839.scope - libcontainer container 65c7fa4e072d91e72f0948e4eac2706540d4364bb1f8690248d325808ee98839.
Jan 24 00:35:21.818228 systemd[1]: Started cri-containerd-ed21d341d1d3f0233063569700512d110a591b919f4459ec168f3ad338c8c819.scope - libcontainer container ed21d341d1d3f0233063569700512d110a591b919f4459ec168f3ad338c8c819.
Jan 24 00:35:21.856017 kubelet[2180]: E0124 00:35:21.854996 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.184.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-56b1d28098?timeout=10s\": dial tcp 65.21.184.255:6443: connect: connection refused" interval="1.6s"
Jan 24 00:35:21.859597 containerd[1515]: time="2026-01-24T00:35:21.859008500Z" level=info msg="StartContainer for \"1e8cb66e9f721cbc97e95dddf8beecf29e9ce1ed100a1c91cf0525e90d67e063\" returns successfully"
Jan 24 00:35:21.876054 containerd[1515]: time="2026-01-24T00:35:21.875639746Z" level=info msg="StartContainer for \"ed21d341d1d3f0233063569700512d110a591b919f4459ec168f3ad338c8c819\" returns successfully"
Jan 24 00:35:21.883483 containerd[1515]: time="2026-01-24T00:35:21.883239373Z" level=info msg="StartContainer for \"65c7fa4e072d91e72f0948e4eac2706540d4364bb1f8690248d325808ee98839\" returns successfully"
Jan 24 00:35:22.035361 kubelet[2180]: I0124 00:35:22.035316 2180 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:22.496962 kubelet[2180]: E0124 00:35:22.496711 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-56b1d28098\" not found" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:22.501160 kubelet[2180]: E0124 00:35:22.500288 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-56b1d28098\" not found" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:22.501795 kubelet[2180]: E0124 00:35:22.501783 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-56b1d28098\" not found" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:23.157525 kubelet[2180]: I0124 00:35:23.157483 2180 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:23.157653 kubelet[2180]: E0124 00:35:23.157526 2180 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-56b1d28098\": node \"ci-4081-3-6-n-56b1d28098\" not found"
Jan 24 00:35:23.170080 kubelet[2180]: E0124 00:35:23.170047 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-56b1d28098\" not found"
Jan 24 00:35:23.270453 kubelet[2180]: E0124 00:35:23.270392 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-56b1d28098\" not found"
Jan 24 00:35:23.371405 kubelet[2180]: E0124 00:35:23.371336 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-56b1d28098\" not found"
Jan 24 00:35:23.471980 kubelet[2180]: E0124 00:35:23.471897 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-56b1d28098\" not found"
Jan 24 00:35:23.504265 kubelet[2180]: E0124 00:35:23.504185 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-56b1d28098\" not found" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:23.504992 kubelet[2180]: E0124 00:35:23.504768 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-56b1d28098\" not found" node="ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:23.572473 kubelet[2180]: E0124 00:35:23.572386 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-56b1d28098\" not found"
Jan 24 00:35:23.673575 kubelet[2180]: E0124 00:35:23.673521 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-56b1d28098\" not found"
Jan 24 00:35:23.774832 kubelet[2180]: E0124 00:35:23.774652 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-56b1d28098\" not found"
Jan 24 00:35:23.874908 kubelet[2180]: E0124 00:35:23.874820 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-56b1d28098\" not found"
Jan 24 00:35:23.948242 kubelet[2180]: I0124 00:35:23.948182 2180 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:23.956061 kubelet[2180]: E0124 00:35:23.956006 2180 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-56b1d28098\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:23.956061 kubelet[2180]: I0124 00:35:23.956040 2180 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:23.958473 kubelet[2180]: E0124 00:35:23.958218 2180 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-56b1d28098\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:23.958473 kubelet[2180]: I0124 00:35:23.958245 2180 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:23.960295 kubelet[2180]: E0124 00:35:23.960228 2180 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-56b1d28098\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098"
Jan 24 00:35:24.429204 kubelet[2180]: I0124 00:35:24.429144 2180 apiserver.go:52] "Watching apiserver"
Jan 24 00:35:24.451827 kubelet[2180]: I0124 00:35:24.451745 2180 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 24 00:35:25.584886 systemd[1]: Reloading requested from client PID 2464 ('systemctl') (unit session-7.scope)...
Jan 24 00:35:25.584910 systemd[1]: Reloading...
Jan 24 00:35:25.716031 zram_generator::config[2507]: No configuration found.
Jan 24 00:35:25.795211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:35:25.865319 systemd[1]: Reloading finished in 279 ms.
Jan 24 00:35:25.903350 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:35:25.922261 systemd[1]: kubelet.service: Deactivated successfully.
Jan 24 00:35:25.922432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:35:25.930274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:35:26.064109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:35:26.064837 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 00:35:26.142073 kubelet[2555]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:35:26.142073 kubelet[2555]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 00:35:26.142073 kubelet[2555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:35:26.142073 kubelet[2555]: I0124 00:35:26.140424 2555 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 00:35:26.153397 kubelet[2555]: I0124 00:35:26.153343 2555 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 24 00:35:26.153397 kubelet[2555]: I0124 00:35:26.153378 2555 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 00:35:26.153714 kubelet[2555]: I0124 00:35:26.153671 2555 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 24 00:35:26.155822 kubelet[2555]: I0124 00:35:26.155770 2555 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 24 00:35:26.160170 kubelet[2555]: I0124 00:35:26.159648 2555 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 00:35:26.165170 kubelet[2555]: E0124 00:35:26.165093 2555 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 00:35:26.165170 kubelet[2555]: I0124 00:35:26.165137 2555 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 24 00:35:26.177346 kubelet[2555]: I0124 00:35:26.177294 2555 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 24 00:35:26.177874 kubelet[2555]: I0124 00:35:26.177798 2555 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 00:35:26.182986 kubelet[2555]: I0124 00:35:26.177876 2555 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-56b1d28098","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 24 00:35:26.182986 kubelet[2555]: I0124 00:35:26.181298 2555 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 00:35:26.182986 kubelet[2555]: I0124 00:35:26.181318 2555 container_manager_linux.go:303] "Creating device plugin manager"
Jan 24 00:35:26.182986 kubelet[2555]: I0124 00:35:26.181408 2555 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:35:26.182986 kubelet[2555]: I0124 00:35:26.181801 2555 kubelet.go:480] "Attempting to sync node with API server"
Jan 24 00:35:26.183289 kubelet[2555]: I0124 00:35:26.181818 2555 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 00:35:26.183289 kubelet[2555]: I0124 00:35:26.181850 2555 kubelet.go:386] "Adding apiserver pod source"
Jan 24 00:35:26.183289 kubelet[2555]: I0124 00:35:26.181876 2555 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 00:35:26.189755 kubelet[2555]: I0124 00:35:26.189679 2555 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 24 00:35:26.190562 kubelet[2555]: I0124 00:35:26.190495 2555 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 24 00:35:26.202799 kubelet[2555]: I0124 00:35:26.202451 2555 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 24 00:35:26.203187 kubelet[2555]: I0124 00:35:26.203146 2555 server.go:1289] "Started kubelet"
Jan 24 00:35:26.208084 kubelet[2555]: I0124 00:35:26.207684 2555 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24
00:35:26.209663 kubelet[2555]: I0124 00:35:26.208771 2555 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:35:26.212049 kubelet[2555]: I0124 00:35:26.211922 2555 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:35:26.212810 kubelet[2555]: I0124 00:35:26.210377 2555 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:35:26.213010 kubelet[2555]: I0124 00:35:26.209823 2555 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:35:26.214241 kubelet[2555]: I0124 00:35:26.214216 2555 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:35:26.216146 kubelet[2555]: I0124 00:35:26.214402 2555 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:35:26.216369 kubelet[2555]: I0124 00:35:26.216319 2555 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:35:26.221916 kubelet[2555]: I0124 00:35:26.221862 2555 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:35:26.226016 kubelet[2555]: I0124 00:35:26.223615 2555 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:35:26.226016 kubelet[2555]: I0124 00:35:26.225354 2555 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:35:26.234325 kubelet[2555]: I0124 00:35:26.234281 2555 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:35:26.238431 kubelet[2555]: E0124 00:35:26.238397 2555 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:35:26.270705 kubelet[2555]: I0124 00:35:26.270682 2555 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 00:35:26.271754 kubelet[2555]: I0124 00:35:26.271742 2555 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 00:35:26.271826 kubelet[2555]: I0124 00:35:26.271820 2555 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:35:26.271871 kubelet[2555]: I0124 00:35:26.271864 2555 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
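Annotation on the PriorityClass failures earlier in this log (the kubelet[2180] entries at 00:35:23): the control-plane static pods reference the built-in system-node-critical PriorityClass, and while the static pods themselves run regardless, the priority admission plugin rejects their mirror pods until that class exists. The API server only materialises the built-in classes via a post-start hook once it is serving, so the very first mirror-pod attempts race it and fail; the race resolves itself, and by 00:35:27 below the only remaining error is "already exists". A quick check, assuming kubectl access to this cluster, that the built-ins have landed (2000001000 / 2000000000 are the well-known values):

import json, subprocess

# List PriorityClasses and report whether the two built-ins exist yet.
# Assumes `kubectl` is on PATH and pointed at this cluster.
out = subprocess.run(
    ["kubectl", "get", "priorityclasses", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout
classes = {i["metadata"]["name"]: i.get("value")
           for i in json.loads(out)["items"]}
for name in ("system-node-critical", "system-cluster-critical"):
    print(name, "->", classes.get(name, "MISSING"))
# Expected once the API server has finished bootstrapping:
#   system-node-critical -> 2000001000
#   system-cluster-critical -> 2000000000
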
Jan 24 00:35:26.271922 kubelet[2555]: I0124 00:35:26.271916 2555 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 00:35:26.272021 kubelet[2555]: E0124 00:35:26.272009 2555 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:35:26.308925 kubelet[2555]: I0124 00:35:26.308904 2555 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:35:26.309056 kubelet[2555]: I0124 00:35:26.309048 2555 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:35:26.309099 kubelet[2555]: I0124 00:35:26.309094 2555 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:35:26.309247 kubelet[2555]: I0124 00:35:26.309238 2555 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:35:26.309291 kubelet[2555]: I0124 00:35:26.309279 2555 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:35:26.309338 kubelet[2555]: I0124 00:35:26.309331 2555 policy_none.go:49] "None policy: Start" Jan 24 00:35:26.309370 kubelet[2555]: I0124 00:35:26.309365 2555 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:35:26.309402 kubelet[2555]: I0124 00:35:26.309397 2555 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:35:26.309493 kubelet[2555]: I0124 00:35:26.309486 2555 state_mem.go:75] "Updated machine memory state" Jan 24 00:35:26.312531 kubelet[2555]: E0124 00:35:26.312518 2555 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:35:26.312964 kubelet[2555]: I0124 00:35:26.312859 2555 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:35:26.312964 kubelet[2555]: I0124 00:35:26.312870 2555 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:35:26.313723 kubelet[2555]: E0124 00:35:26.313712 2555 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:35:26.314211 kubelet[2555]: I0124 00:35:26.314192 2555 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:35:26.373914 kubelet[2555]: I0124 00:35:26.373811 2555 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.374901 kubelet[2555]: I0124 00:35:26.374280 2555 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.375065 kubelet[2555]: I0124 00:35:26.374479 2555 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.418068 kubelet[2555]: I0124 00:35:26.417821 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a85900c21ac82dc552ac6e4fa04bf33-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-56b1d28098\" (UID: \"3a85900c21ac82dc552ac6e4fa04bf33\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.418068 kubelet[2555]: I0124 00:35:26.417881 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a85900c21ac82dc552ac6e4fa04bf33-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-56b1d28098\" (UID: \"3a85900c21ac82dc552ac6e4fa04bf33\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.418068 kubelet[2555]: I0124 00:35:26.417914 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a85900c21ac82dc552ac6e4fa04bf33-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-56b1d28098\" (UID: \"3a85900c21ac82dc552ac6e4fa04bf33\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.421173 kubelet[2555]: I0124 00:35:26.420535 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f120dfa562fa18b3dfaa9b6620d626fe-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-56b1d28098\" (UID: \"f120dfa562fa18b3dfaa9b6620d626fe\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.421173 kubelet[2555]: I0124 00:35:26.420586 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c12576f038ace5669437579bd26f640-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-56b1d28098\" (UID: \"2c12576f038ace5669437579bd26f640\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.421173 kubelet[2555]: I0124 00:35:26.420615 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c12576f038ace5669437579bd26f640-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-56b1d28098\" (UID: \"2c12576f038ace5669437579bd26f640\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.421173 kubelet[2555]: I0124 00:35:26.420681 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c12576f038ace5669437579bd26f640-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081-3-6-n-56b1d28098\" (UID: \"2c12576f038ace5669437579bd26f640\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.421173 kubelet[2555]: I0124 00:35:26.420741 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a85900c21ac82dc552ac6e4fa04bf33-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-56b1d28098\" (UID: \"3a85900c21ac82dc552ac6e4fa04bf33\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.421531 kubelet[2555]: I0124 00:35:26.420812 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a85900c21ac82dc552ac6e4fa04bf33-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-56b1d28098\" (UID: \"3a85900c21ac82dc552ac6e4fa04bf33\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.423012 kubelet[2555]: I0124 00:35:26.422929 2555 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.433893 kubelet[2555]: I0124 00:35:26.433584 2555 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-56b1d28098" Jan 24 00:35:26.434088 kubelet[2555]: I0124 00:35:26.434032 2555 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-56b1d28098" Jan 24 00:35:27.185717 kubelet[2555]: I0124 00:35:27.184823 2555 apiserver.go:52] "Watching apiserver" Jan 24 00:35:27.217272 kubelet[2555]: I0124 00:35:27.217195 2555 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:35:27.307990 kubelet[2555]: I0124 00:35:27.304115 2555 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:27.317055 kubelet[2555]: E0124 00:35:27.316808 2555 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-56b1d28098\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098" Jan 24 00:35:27.379601 kubelet[2555]: I0124 00:35:27.379455 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-56b1d28098" podStartSLOduration=1.379400139 podStartE2EDuration="1.379400139s" podCreationTimestamp="2026-01-24 00:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:35:27.343134338 +0000 UTC m=+1.269285294" watchObservedRunningTime="2026-01-24 00:35:27.379400139 +0000 UTC m=+1.305551096" Jan 24 00:35:27.380196 kubelet[2555]: I0124 00:35:27.380153 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-56b1d28098" podStartSLOduration=1.380098808 podStartE2EDuration="1.380098808s" podCreationTimestamp="2026-01-24 00:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:35:27.376597503 +0000 UTC m=+1.302748469" watchObservedRunningTime="2026-01-24 00:35:27.380098808 +0000 UTC m=+1.306249764" Jan 24 00:35:27.412802 kubelet[2555]: I0124 00:35:27.412149 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-56b1d28098" 
podStartSLOduration=1.4120497379999999 podStartE2EDuration="1.412049738s" podCreationTimestamp="2026-01-24 00:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:35:27.3953669 +0000 UTC m=+1.321517826" watchObservedRunningTime="2026-01-24 00:35:27.412049738 +0000 UTC m=+1.338200714" Jan 24 00:35:31.272296 kubelet[2555]: I0124 00:35:31.272244 2555 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:35:31.273007 containerd[1515]: time="2026-01-24T00:35:31.272730497Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:35:31.273513 kubelet[2555]: I0124 00:35:31.273290 2555 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:35:31.453646 kubelet[2555]: I0124 00:35:31.452267 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/48403d21-a535-413a-90e8-3890a73f25fe-kube-proxy\") pod \"kube-proxy-qbrnv\" (UID: \"48403d21-a535-413a-90e8-3890a73f25fe\") " pod="kube-system/kube-proxy-qbrnv" Jan 24 00:35:31.453646 kubelet[2555]: I0124 00:35:31.452318 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48403d21-a535-413a-90e8-3890a73f25fe-xtables-lock\") pod \"kube-proxy-qbrnv\" (UID: \"48403d21-a535-413a-90e8-3890a73f25fe\") " pod="kube-system/kube-proxy-qbrnv" Jan 24 00:35:31.453646 kubelet[2555]: I0124 00:35:31.452344 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48403d21-a535-413a-90e8-3890a73f25fe-lib-modules\") pod \"kube-proxy-qbrnv\" (UID: \"48403d21-a535-413a-90e8-3890a73f25fe\") " pod="kube-system/kube-proxy-qbrnv" Jan 24 00:35:31.453646 kubelet[2555]: I0124 00:35:31.452367 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66kkb\" (UniqueName: \"kubernetes.io/projected/48403d21-a535-413a-90e8-3890a73f25fe-kube-api-access-66kkb\") pod \"kube-proxy-qbrnv\" (UID: \"48403d21-a535-413a-90e8-3890a73f25fe\") " pod="kube-system/kube-proxy-qbrnv" Jan 24 00:35:31.456723 systemd[1]: Created slice kubepods-besteffort-pod48403d21_a535_413a_90e8_3890a73f25fe.slice - libcontainer container kubepods-besteffort-pod48403d21_a535_413a_90e8_3890a73f25fe.slice. Jan 24 00:35:31.561079 kubelet[2555]: E0124 00:35:31.560842 2555 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 24 00:35:31.561079 kubelet[2555]: E0124 00:35:31.560895 2555 projected.go:194] Error preparing data for projected volume kube-api-access-66kkb for pod kube-system/kube-proxy-qbrnv: configmap "kube-root-ca.crt" not found Jan 24 00:35:31.561079 kubelet[2555]: E0124 00:35:31.561011 2555 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48403d21-a535-413a-90e8-3890a73f25fe-kube-api-access-66kkb podName:48403d21-a535-413a-90e8-3890a73f25fe nodeName:}" failed. No retries permitted until 2026-01-24 00:35:32.060986272 +0000 UTC m=+5.987137218 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-66kkb" (UniqueName: "kubernetes.io/projected/48403d21-a535-413a-90e8-3890a73f25fe-kube-api-access-66kkb") pod "kube-proxy-qbrnv" (UID: "48403d21-a535-413a-90e8-3890a73f25fe") : configmap "kube-root-ca.crt" not found Jan 24 00:35:32.367880 containerd[1515]: time="2026-01-24T00:35:32.367815485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbrnv,Uid:48403d21-a535-413a-90e8-3890a73f25fe,Namespace:kube-system,Attempt:0,}" Jan 24 00:35:32.409318 containerd[1515]: time="2026-01-24T00:35:32.408691978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:35:32.409318 containerd[1515]: time="2026-01-24T00:35:32.408849563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:35:32.409318 containerd[1515]: time="2026-01-24T00:35:32.409019936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:35:32.409743 containerd[1515]: time="2026-01-24T00:35:32.409290772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:35:32.466186 systemd[1]: Started cri-containerd-bde0acac2369b138ad0f8a61d140bc133dccba4f1e376e30715e216c36c510a5.scope - libcontainer container bde0acac2369b138ad0f8a61d140bc133dccba4f1e376e30715e216c36c510a5. Jan 24 00:35:32.506438 systemd[1]: Created slice kubepods-besteffort-pod0aa4e390_1222_49f2_b874_105606b753dc.slice - libcontainer container kubepods-besteffort-pod0aa4e390_1222_49f2_b874_105606b753dc.slice. Jan 24 00:35:32.523733 containerd[1515]: time="2026-01-24T00:35:32.523704486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbrnv,Uid:48403d21-a535-413a-90e8-3890a73f25fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"bde0acac2369b138ad0f8a61d140bc133dccba4f1e376e30715e216c36c510a5\"" Jan 24 00:35:32.528830 containerd[1515]: time="2026-01-24T00:35:32.528499423Z" level=info msg="CreateContainer within sandbox \"bde0acac2369b138ad0f8a61d140bc133dccba4f1e376e30715e216c36c510a5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:35:32.552144 containerd[1515]: time="2026-01-24T00:35:32.552118010Z" level=info msg="CreateContainer within sandbox \"bde0acac2369b138ad0f8a61d140bc133dccba4f1e376e30715e216c36c510a5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"328b0f9bc589476603a345d36cfc1ed8f51b7c2a09dd0b682ecf4597af2d8ea1\"" Jan 24 00:35:32.553035 containerd[1515]: time="2026-01-24T00:35:32.552793272Z" level=info msg="StartContainer for \"328b0f9bc589476603a345d36cfc1ed8f51b7c2a09dd0b682ecf4597af2d8ea1\"" Jan 24 00:35:32.560041 kubelet[2555]: I0124 00:35:32.559962 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v8pm\" (UniqueName: \"kubernetes.io/projected/0aa4e390-1222-49f2-b874-105606b753dc-kube-api-access-6v8pm\") pod \"tigera-operator-7dcd859c48-5qqk8\" (UID: \"0aa4e390-1222-49f2-b874-105606b753dc\") " pod="tigera-operator/tigera-operator-7dcd859c48-5qqk8" Jan 24 00:35:32.560041 kubelet[2555]: I0124 00:35:32.560003 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/0aa4e390-1222-49f2-b874-105606b753dc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-5qqk8\" (UID: \"0aa4e390-1222-49f2-b874-105606b753dc\") " pod="tigera-operator/tigera-operator-7dcd859c48-5qqk8" Jan 24 00:35:32.583193 systemd[1]: Started cri-containerd-328b0f9bc589476603a345d36cfc1ed8f51b7c2a09dd0b682ecf4597af2d8ea1.scope - libcontainer container 328b0f9bc589476603a345d36cfc1ed8f51b7c2a09dd0b682ecf4597af2d8ea1. Jan 24 00:35:32.624580 containerd[1515]: time="2026-01-24T00:35:32.624480897Z" level=info msg="StartContainer for \"328b0f9bc589476603a345d36cfc1ed8f51b7c2a09dd0b682ecf4597af2d8ea1\" returns successfully" Jan 24 00:35:32.810856 containerd[1515]: time="2026-01-24T00:35:32.810664453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5qqk8,Uid:0aa4e390-1222-49f2-b874-105606b753dc,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:35:32.848272 containerd[1515]: time="2026-01-24T00:35:32.848053624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:35:32.848760 containerd[1515]: time="2026-01-24T00:35:32.848277342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:35:32.848760 containerd[1515]: time="2026-01-24T00:35:32.848324110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:35:32.848760 containerd[1515]: time="2026-01-24T00:35:32.848523193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:35:32.887252 systemd[1]: Started cri-containerd-15b3afb52aceccd7d898946dfc9cdae8df9e661d1e84abfb798dc2c780c6fbf5.scope - libcontainer container 15b3afb52aceccd7d898946dfc9cdae8df9e661d1e84abfb798dc2c780c6fbf5. Jan 24 00:35:32.950396 containerd[1515]: time="2026-01-24T00:35:32.950351373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5qqk8,Uid:0aa4e390-1222-49f2-b874-105606b753dc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"15b3afb52aceccd7d898946dfc9cdae8df9e661d1e84abfb798dc2c780c6fbf5\"" Jan 24 00:35:32.952503 containerd[1515]: time="2026-01-24T00:35:32.952482241Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:35:33.075861 update_engine[1488]: I20260124 00:35:33.075785 1488 update_attempter.cc:509] Updating boot flags... Jan 24 00:35:33.144077 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2866) Jan 24 00:35:33.177484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032302102.mount: Deactivated successfully. 
Jan 24 00:35:33.210283 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2867) Jan 24 00:35:33.326899 kubelet[2555]: I0124 00:35:33.326810 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qbrnv" podStartSLOduration=2.326788354 podStartE2EDuration="2.326788354s" podCreationTimestamp="2026-01-24 00:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:35:33.326580435 +0000 UTC m=+7.252731381" watchObservedRunningTime="2026-01-24 00:35:33.326788354 +0000 UTC m=+7.252939310" Jan 24 00:35:34.723134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006040050.mount: Deactivated successfully. Jan 24 00:35:35.183363 containerd[1515]: time="2026-01-24T00:35:35.183237155Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:35.184258 containerd[1515]: time="2026-01-24T00:35:35.184165894Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:35:35.185794 containerd[1515]: time="2026-01-24T00:35:35.184978811Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:35.187114 containerd[1515]: time="2026-01-24T00:35:35.186584447Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:35.187114 containerd[1515]: time="2026-01-24T00:35:35.187038250Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.234530796s" Jan 24 00:35:35.187114 containerd[1515]: time="2026-01-24T00:35:35.187057829Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:35:35.190073 containerd[1515]: time="2026-01-24T00:35:35.190040721Z" level=info msg="CreateContainer within sandbox \"15b3afb52aceccd7d898946dfc9cdae8df9e661d1e84abfb798dc2c780c6fbf5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:35:35.199730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864096402.mount: Deactivated successfully. Jan 24 00:35:35.211888 containerd[1515]: time="2026-01-24T00:35:35.211844890Z" level=info msg="CreateContainer within sandbox \"15b3afb52aceccd7d898946dfc9cdae8df9e661d1e84abfb798dc2c780c6fbf5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4\"" Jan 24 00:35:35.212919 containerd[1515]: time="2026-01-24T00:35:35.212361515Z" level=info msg="StartContainer for \"bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4\"" Jan 24 00:35:35.242061 systemd[1]: Started cri-containerd-bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4.scope - libcontainer container bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4. 
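A quick sanity check on the tigera operator image pull recorded just above: containerd logged both the bytes fetched over the network (bytes read=25061691) and the wall-clock pull time (2.234530796s), which pins the effective transfer rate:

# Numbers lifted from the containerd entries above.
bytes_read = 25_061_691      # "active requests=0, bytes read=..."
pull_secs  = 2.234530796     # "... in 2.234530796s"
print(f"{bytes_read / pull_secs / 2**20:.1f} MiB/s")   # ~10.7 MiB/s

(The nearby size "25057686" is the image's recorded content size; the two counts differ only by registry and manifest overhead.)
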
Jan 24 00:35:35.264046 containerd[1515]: time="2026-01-24T00:35:35.263983335Z" level=info msg="StartContainer for \"bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4\" returns successfully" Jan 24 00:35:35.436345 kubelet[2555]: I0124 00:35:35.436283 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-5qqk8" podStartSLOduration=1.200760799 podStartE2EDuration="3.436265149s" podCreationTimestamp="2026-01-24 00:35:32 +0000 UTC" firstStartedPulling="2026-01-24 00:35:32.952131992 +0000 UTC m=+6.878282908" lastFinishedPulling="2026-01-24 00:35:35.187636352 +0000 UTC m=+9.113787258" observedRunningTime="2026-01-24 00:35:35.33202953 +0000 UTC m=+9.258180466" watchObservedRunningTime="2026-01-24 00:35:35.436265149 +0000 UTC m=+9.362416105" Jan 24 00:35:37.498430 systemd[1]: cri-containerd-bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4.scope: Deactivated successfully. Jan 24 00:35:37.521123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4-rootfs.mount: Deactivated successfully. Jan 24 00:35:37.585659 containerd[1515]: time="2026-01-24T00:35:37.585574086Z" level=info msg="shim disconnected" id=bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4 namespace=k8s.io Jan 24 00:35:37.585659 containerd[1515]: time="2026-01-24T00:35:37.585635367Z" level=warning msg="cleaning up after shim disconnected" id=bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4 namespace=k8s.io Jan 24 00:35:37.585659 containerd[1515]: time="2026-01-24T00:35:37.585643334Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:35:38.329455 kubelet[2555]: I0124 00:35:38.328261 2555 scope.go:117] "RemoveContainer" containerID="bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4" Jan 24 00:35:38.337722 containerd[1515]: time="2026-01-24T00:35:38.335132984Z" level=info msg="CreateContainer within sandbox \"15b3afb52aceccd7d898946dfc9cdae8df9e661d1e84abfb798dc2c780c6fbf5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 24 00:35:38.359999 containerd[1515]: time="2026-01-24T00:35:38.358899874Z" level=info msg="CreateContainer within sandbox \"15b3afb52aceccd7d898946dfc9cdae8df9e661d1e84abfb798dc2c780c6fbf5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53\"" Jan 24 00:35:38.362422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664046681.mount: Deactivated successfully. Jan 24 00:35:38.366492 containerd[1515]: time="2026-01-24T00:35:38.363593520Z" level=info msg="StartContainer for \"f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53\"" Jan 24 00:35:38.422200 systemd[1]: Started cri-containerd-f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53.scope - libcontainer container f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53. Jan 24 00:35:38.497195 containerd[1515]: time="2026-01-24T00:35:38.497165025Z" level=info msg="StartContainer for \"f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53\" returns successfully" Jan 24 00:35:40.667639 sudo[1706]: pam_unix(sudo:session): session closed for user root Jan 24 00:35:40.794426 sshd[1703]: pam_unix(sshd:session): session closed for user core Jan 24 00:35:40.802679 systemd[1]: sshd@6-65.21.184.255:22-20.161.92.111:40502.service: Deactivated successfully. 
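The sequence above is a clean container-crash-and-restart cycle: the tigera-operator container's scope is deactivated at 00:35:37.498, containerd reports the shim disconnected, and the kubelet's sync loop removes the dead container ("RemoveContainer") and asks containerd for a replacement in the same sandbox, visible as Attempt:0 becoming Attempt:1 in the CreateContainer metadata. A small sketch for pulling those attempt counters out of containerd entries like these, e.g. to spot crash-looping workloads when scanning a journal:

import re

# Matches e.g. &ContainerMetadata{Name:tigera-operator,Attempt:1,}
pat = re.compile(r"&ContainerMetadata\{Name:([\w.-]+),Attempt:(\d+),\}")
line = ('level=info msg="CreateContainer within sandbox ... for container '
        '&ContainerMetadata{Name:tigera-operator,Attempt:1,}"')
for name, attempt in pat.findall(line):
    print(name, "attempt", attempt)    # tigera-operator attempt 1
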
Jan 24 00:35:40.807642 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:35:40.808107 systemd[1]: session-7.scope: Consumed 5.384s CPU time, 158.5M memory peak, 0B memory swap peak. Jan 24 00:35:40.809378 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:35:40.811045 systemd-logind[1487]: Removed session 7. Jan 24 00:35:47.662026 systemd[1]: Created slice kubepods-besteffort-podcf75ac5e_7124_4c73_b2c7_191c518e38e3.slice - libcontainer container kubepods-besteffort-podcf75ac5e_7124_4c73_b2c7_191c518e38e3.slice. Jan 24 00:35:47.758091 kubelet[2555]: I0124 00:35:47.757887 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cf75ac5e-7124-4c73-b2c7-191c518e38e3-typha-certs\") pod \"calico-typha-64854f4c98-mp4sj\" (UID: \"cf75ac5e-7124-4c73-b2c7-191c518e38e3\") " pod="calico-system/calico-typha-64854f4c98-mp4sj" Jan 24 00:35:47.758091 kubelet[2555]: I0124 00:35:47.757993 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf75ac5e-7124-4c73-b2c7-191c518e38e3-tigera-ca-bundle\") pod \"calico-typha-64854f4c98-mp4sj\" (UID: \"cf75ac5e-7124-4c73-b2c7-191c518e38e3\") " pod="calico-system/calico-typha-64854f4c98-mp4sj" Jan 24 00:35:47.758091 kubelet[2555]: I0124 00:35:47.758035 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8n8g\" (UniqueName: \"kubernetes.io/projected/cf75ac5e-7124-4c73-b2c7-191c518e38e3-kube-api-access-r8n8g\") pod \"calico-typha-64854f4c98-mp4sj\" (UID: \"cf75ac5e-7124-4c73-b2c7-191c518e38e3\") " pod="calico-system/calico-typha-64854f4c98-mp4sj" Jan 24 00:35:47.833721 systemd[1]: Created slice kubepods-besteffort-pod979c5a4c_6078_4dd7_b433_43af57e3b935.slice - libcontainer container kubepods-besteffort-pod979c5a4c_6078_4dd7_b433_43af57e3b935.slice. 
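A note on the "Created slice kubepods-besteffort-pod....slice" entries in this log: with cgroupDriver "systemd" (see the NodeConfig dump at 00:35:26), the kubelet delegates pod cgroups to systemd, naming each slice after the pod's QoS class and its UID with the dashes flattened to underscores. A sketch of the naming rule, reproducing the calico-typha slice above (guaranteed-QoS pods are the exception and omit the QoS segment):

# kubepods-<qos>-pod<uid>.slice, UID dashes becoming underscores.
def pod_slice(uid: str, qos: str = "besteffort") -> str:
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(pod_slice("cf75ac5e-7124-4c73-b2c7-191c518e38e3"))
# kubepods-besteffort-podcf75ac5e_7124_4c73_b2c7_191c518e38e3.slice
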
Jan 24 00:35:47.858526 kubelet[2555]: I0124 00:35:47.858465 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/979c5a4c-6078-4dd7-b433-43af57e3b935-cni-bin-dir\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.858526 kubelet[2555]: I0124 00:35:47.858514 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/979c5a4c-6078-4dd7-b433-43af57e3b935-flexvol-driver-host\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.858526 kubelet[2555]: I0124 00:35:47.858537 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/979c5a4c-6078-4dd7-b433-43af57e3b935-var-lib-calico\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.858769 kubelet[2555]: I0124 00:35:47.858558 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/979c5a4c-6078-4dd7-b433-43af57e3b935-cni-net-dir\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.858769 kubelet[2555]: I0124 00:35:47.858601 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/979c5a4c-6078-4dd7-b433-43af57e3b935-tigera-ca-bundle\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.858769 kubelet[2555]: I0124 00:35:47.858622 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/979c5a4c-6078-4dd7-b433-43af57e3b935-cni-log-dir\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.858769 kubelet[2555]: I0124 00:35:47.858658 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/979c5a4c-6078-4dd7-b433-43af57e3b935-node-certs\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.858769 kubelet[2555]: I0124 00:35:47.858686 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4g7q\" (UniqueName: \"kubernetes.io/projected/979c5a4c-6078-4dd7-b433-43af57e3b935-kube-api-access-n4g7q\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.859050 kubelet[2555]: I0124 00:35:47.858706 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/979c5a4c-6078-4dd7-b433-43af57e3b935-var-run-calico\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.859050 kubelet[2555]: I0124 00:35:47.858734 2555 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/979c5a4c-6078-4dd7-b433-43af57e3b935-lib-modules\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.859050 kubelet[2555]: I0124 00:35:47.858752 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/979c5a4c-6078-4dd7-b433-43af57e3b935-policysync\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.859050 kubelet[2555]: I0124 00:35:47.858770 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/979c5a4c-6078-4dd7-b433-43af57e3b935-xtables-lock\") pod \"calico-node-2g9sh\" (UID: \"979c5a4c-6078-4dd7-b433-43af57e3b935\") " pod="calico-system/calico-node-2g9sh" Jan 24 00:35:47.961839 kubelet[2555]: E0124 00:35:47.961798 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.962601 kubelet[2555]: W0124 00:35:47.962351 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.962601 kubelet[2555]: E0124 00:35:47.962390 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.963765 kubelet[2555]: E0124 00:35:47.963743 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.963906 kubelet[2555]: W0124 00:35:47.963887 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.964223 kubelet[2555]: E0124 00:35:47.964160 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.965488 kubelet[2555]: E0124 00:35:47.965399 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.965488 kubelet[2555]: W0124 00:35:47.965419 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.965488 kubelet[2555]: E0124 00:35:47.965439 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:47.968107 kubelet[2555]: E0124 00:35:47.968059 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.968107 kubelet[2555]: W0124 00:35:47.968094 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.968455 kubelet[2555]: E0124 00:35:47.968120 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.973301 kubelet[2555]: E0124 00:35:47.972222 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.973301 kubelet[2555]: W0124 00:35:47.972247 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.973301 kubelet[2555]: E0124 00:35:47.972266 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.973662 containerd[1515]: time="2026-01-24T00:35:47.972364757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64854f4c98-mp4sj,Uid:cf75ac5e-7124-4c73-b2c7-191c518e38e3,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:47.975510 kubelet[2555]: E0124 00:35:47.975225 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.975510 kubelet[2555]: W0124 00:35:47.975244 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.975510 kubelet[2555]: E0124 00:35:47.975263 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.976821 kubelet[2555]: E0124 00:35:47.975926 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.980068 kubelet[2555]: W0124 00:35:47.979996 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.980068 kubelet[2555]: E0124 00:35:47.980035 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:47.984326 kubelet[2555]: E0124 00:35:47.983149 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.984326 kubelet[2555]: W0124 00:35:47.983176 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.984326 kubelet[2555]: E0124 00:35:47.983196 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.986321 kubelet[2555]: E0124 00:35:47.986281 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.986321 kubelet[2555]: W0124 00:35:47.986313 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.986452 kubelet[2555]: E0124 00:35:47.986331 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.987123 kubelet[2555]: E0124 00:35:47.986755 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.987123 kubelet[2555]: W0124 00:35:47.986781 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.987123 kubelet[2555]: E0124 00:35:47.986797 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.987343 kubelet[2555]: E0124 00:35:47.987205 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.987343 kubelet[2555]: W0124 00:35:47.987220 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.987343 kubelet[2555]: E0124 00:35:47.987235 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.988771 kubelet[2555]: E0124 00:35:47.987689 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.988771 kubelet[2555]: W0124 00:35:47.987732 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.988771 kubelet[2555]: E0124 00:35:47.987749 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:47.988771 kubelet[2555]: E0124 00:35:47.988278 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.988771 kubelet[2555]: W0124 00:35:47.988292 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.988771 kubelet[2555]: E0124 00:35:47.988341 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.990904 kubelet[2555]: E0124 00:35:47.989181 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.990904 kubelet[2555]: W0124 00:35:47.989206 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.990904 kubelet[2555]: E0124 00:35:47.989223 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.990904 kubelet[2555]: E0124 00:35:47.989821 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.990904 kubelet[2555]: W0124 00:35:47.989837 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.990904 kubelet[2555]: E0124 00:35:47.989852 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.990904 kubelet[2555]: E0124 00:35:47.990499 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.990904 kubelet[2555]: W0124 00:35:47.990516 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.990904 kubelet[2555]: E0124 00:35:47.990531 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:47.991312 kubelet[2555]: E0124 00:35:47.991253 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.991312 kubelet[2555]: W0124 00:35:47.991273 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.991387 kubelet[2555]: E0124 00:35:47.991293 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:47.992363 kubelet[2555]: E0124 00:35:47.992190 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:47.992363 kubelet[2555]: W0124 00:35:47.992209 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:47.992363 kubelet[2555]: E0124 00:35:47.992226 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:48.041416 kubelet[2555]: E0124 00:35:48.039023 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:48.041416 kubelet[2555]: W0124 00:35:48.039053 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:48.041416 kubelet[2555]: E0124 00:35:48.039078 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:48.048927 containerd[1515]: time="2026-01-24T00:35:48.048182382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:35:48.054502 containerd[1515]: time="2026-01-24T00:35:48.054104059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:35:48.054738 containerd[1515]: time="2026-01-24T00:35:48.054659247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:35:48.055397 containerd[1515]: time="2026-01-24T00:35:48.055037755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:35:48.055471 kubelet[2555]: E0124 00:35:48.055439 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:35:48.085116 systemd[1]: Started cri-containerd-cb4939b9993c3106f148e8866523d68056343ab020b527d8498667b4a75234b1.scope - libcontainer container cb4939b9993c3106f148e8866523d68056343ab020b527d8498667b4a75234b1. 
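All of the repeated FlexVolume failures in this stretch trace back to a single missing binary: the kubelet periodically probes its volume plugin directory, execs each driver with the init argument, and tries to unmarshal stdout as JSON, but /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, so every probe yields empty output and "unexpected end of JSON input". This is expected noise at this point in the boot: the calico-node pod being scheduled here mounts exactly that directory (the flexvol-driver-host volume above), and the flexvol-driver init container that ships with calico-node normally installs the uds binary, after which the probes go quiet. For reference, the shape of the reply the kubelet is waiting for, as a minimal stand-in driver (the capability flag is illustrative):

#!/usr/bin/env python3
# Minimal FlexVolume driver stub: kubelet runs `<driver> init` and expects
# a JSON status object on stdout; empty output is what produces the
# "unexpected end of JSON input" errors in this log.
import json, sys

if len(sys.argv) > 1 and sys.argv[1] == "init":
    print(json.dumps({"status": "Success",
                      "capabilities": {"attach": False}}))
    sys.exit(0)
print(json.dumps({"status": "Not supported"}))
sys.exit(1)
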
Jan 24 00:35:48.117149 containerd[1515]: time="2026-01-24T00:35:48.117068210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64854f4c98-mp4sj,Uid:cf75ac5e-7124-4c73-b2c7-191c518e38e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb4939b9993c3106f148e8866523d68056343ab020b527d8498667b4a75234b1\""
Jan 24 00:35:48.118589 containerd[1515]: time="2026-01-24T00:35:48.118396823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 24 00:35:48.139985 containerd[1515]: time="2026-01-24T00:35:48.139908453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2g9sh,Uid:979c5a4c-6078-4dd7-b433-43af57e3b935,Namespace:calico-system,Attempt:0,}"
Jan 24 00:35:48.155571 kubelet[2555]: E0124 00:35:48.155523 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:35:48.156017 kubelet[2555]: W0124 00:35:48.156001 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:35:48.156067 kubelet[2555]: E0124 00:35:48.156057 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:35:48.166317 kubelet[2555]: I0124 00:35:48.166201 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/641bc171-0396-4a65-b184-ec8db27324ea-registration-dir\") pod \"csi-node-driver-njp75\" (UID: \"641bc171-0396-4a65-b184-ec8db27324ea\") " pod="calico-system/csi-node-driver-njp75"
Jan 24 00:35:48.167480 kubelet[2555]: I0124 00:35:48.166753 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/641bc171-0396-4a65-b184-ec8db27324ea-kubelet-dir\") pod \"csi-node-driver-njp75\" (UID: \"641bc171-0396-4a65-b184-ec8db27324ea\") " pod="calico-system/csi-node-driver-njp75"
Jan 24 00:35:48.167480 kubelet[2555]: I0124 00:35:48.167214 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/641bc171-0396-4a65-b184-ec8db27324ea-varrun\") pod \"csi-node-driver-njp75\" (UID: \"641bc171-0396-4a65-b184-ec8db27324ea\") " pod="calico-system/csi-node-driver-njp75"
Jan 24 00:35:48.168619 kubelet[2555]: I0124 00:35:48.168380 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/641bc171-0396-4a65-b184-ec8db27324ea-socket-dir\") pod \"csi-node-driver-njp75\" (UID: \"641bc171-0396-4a65-b184-ec8db27324ea\") " pod="calico-system/csi-node-driver-njp75"
Jan 24 00:35:48.169696 containerd[1515]: time="2026-01-24T00:35:48.169420573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:35:48.169696 containerd[1515]: time="2026-01-24T00:35:48.169477057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:35:48.169696 containerd[1515]: time="2026-01-24T00:35:48.169487723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:35:48.169696 containerd[1515]: time="2026-01-24T00:35:48.169551519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:35:48.193084 systemd[1]: Started cri-containerd-39fe413b913a29b29e43146b780c261e02af4e7ac85d35702ef4e44e2af4a2e8.scope - libcontainer container 39fe413b913a29b29e43146b780c261e02af4e7ac85d35702ef4e44e2af4a2e8.
Jan 24 00:35:48.214646 containerd[1515]: time="2026-01-24T00:35:48.214307384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2g9sh,Uid:979c5a4c-6078-4dd7-b433-43af57e3b935,Namespace:calico-system,Attempt:0,} returns sandbox id \"39fe413b913a29b29e43146b780c261e02af4e7ac85d35702ef4e44e2af4a2e8\""
Jan 24 00:35:48.276462 kubelet[2555]: I0124 00:35:48.276297 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwpww\" (UniqueName: \"kubernetes.io/projected/641bc171-0396-4a65-b184-ec8db27324ea-kube-api-access-hwpww\") pod \"csi-node-driver-njp75\" (UID: \"641bc171-0396-4a65-b184-ec8db27324ea\") " pod="calico-system/csi-node-driver-njp75"
Jan 24 00:35:49.926073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3244904733.mount: Deactivated successfully.
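The driver-call.go/plugins.go triplet that repeats throughout this boot is the kubelet's dynamic plugin prober scanning its FlexVolume plugin directory: it finds the vendor directory nodeagent~uds but no executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, so the init call yields empty output that cannot be parsed as JSON. Calico normally installs this uds driver through its pod2daemon-flexvol image (pulled just below), after which the probing quiets down. As a sketch of the calling convention only, assuming the documented FlexVolume contract and not Calico's actual driver, a minimal driver answers every call with a JSON status object:

// flexvol_stub.go - hypothetical minimal FlexVolume driver, for illustration
// only; it is NOT the Calico uds driver this log expects at
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the kubelet unmarshals after every
// driver call; an empty reply is what produces "unexpected end of JSON input".
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	op := ""
	if len(os.Args) > 1 {
		op = os.Args[1]
	}
	var out driverStatus
	if op == "init" {
		// Advertise attach=false so the kubelet skips attach/detach calls.
		out = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	} else {
		out = driverStatus{Status: "Not supported", Message: "operation not implemented: " + op}
	}
	b, err := json.Marshal(out)
	if err != nil {
		os.Exit(1)
	}
	fmt.Println(string(b))
}

Built and dropped into the nodeagent~uds directory as uds, a stub like this would satisfy the prober's init probe; the real driver additionally implements the volume operations Calico relies on.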
Jan 24 00:35:50.273776 kubelet[2555]: E0124 00:35:50.273385 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea"
Jan 24 00:35:51.135053 containerd[1515]: time="2026-01-24T00:35:51.135010297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:51.136049 containerd[1515]: time="2026-01-24T00:35:51.135889240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 24 00:35:51.137628 containerd[1515]: time="2026-01-24T00:35:51.136838087Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:51.138890 containerd[1515]: time="2026-01-24T00:35:51.138406270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:35:51.138890 containerd[1515]: time="2026-01-24T00:35:51.138805556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.020381938s"
Jan 24 00:35:51.138890 containerd[1515]: time="2026-01-24T00:35:51.138825646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 24 00:35:51.139563 containerd[1515]: time="2026-01-24T00:35:51.139529533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 24 00:35:51.152642 containerd[1515]: time="2026-01-24T00:35:51.152617467Z" level=info msg="CreateContainer within sandbox \"cb4939b9993c3106f148e8866523d68056343ab020b527d8498667b4a75234b1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 24 00:35:51.162760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325442713.mount: Deactivated successfully.
Jan 24 00:35:51.163289 containerd[1515]: time="2026-01-24T00:35:51.163222919Z" level=info msg="CreateContainer within sandbox \"cb4939b9993c3106f148e8866523d68056343ab020b527d8498667b4a75234b1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a9886070c66d5898f57f54e192ad987e4f99c550d43145cc617ac0063f23671f\""
Jan 24 00:35:51.164001 containerd[1515]: time="2026-01-24T00:35:51.163981312Z" level=info msg="StartContainer for \"a9886070c66d5898f57f54e192ad987e4f99c550d43145cc617ac0063f23671f\""
Jan 24 00:35:51.189081 systemd[1]: Started cri-containerd-a9886070c66d5898f57f54e192ad987e4f99c550d43145cc617ac0063f23671f.scope - libcontainer container a9886070c66d5898f57f54e192ad987e4f99c550d43145cc617ac0063f23671f.
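The ImageCreate/Pulled sequence above is containerd's CRI plugin resolving and unpacking ghcr.io/flatcar/calico/typha:v3.30.4 (about 35 MB in 3.02 s). The same pull can be reproduced against this node's containerd with the public Go client; a minimal sketch, assuming the default socket path and the CRI plugin's k8s.io namespace:

// pull_typha.go - sketch of the pull the CRI plugin performed above, using
// the public containerd Go client; socket path and namespace are assumed
// from the defaults on a node like this one.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin stores Kubernetes images under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack, mirroring the "PullImage ... returns image reference"
	// entry in the log above.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(img.Name(), img.Target().Digest)
}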
Jan 24 00:35:51.226675 containerd[1515]: time="2026-01-24T00:35:51.226641608Z" level=info msg="StartContainer for \"a9886070c66d5898f57f54e192ad987e4f99c550d43145cc617ac0063f23671f\" returns successfully"
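The CreateContainer/StartContainer pair logged above is CRI-level; underneath, containerd creates a container (metadata, a snapshot, an OCI spec) and then a task that runs it via the runc v2 shim whose plugins were loaded earlier. A rough equivalent with the plain Go client, where the container and snapshot IDs are invented for the sketch and the image is assumed already pulled (see the previous sketch):

// start_sketch.go - rough equivalent of CreateContainer/StartContainer using
// the plain containerd client; "typha-sketch" IDs are hypothetical, unlike
// the CRI-generated ID a9886070... in the log.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4")
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: metadata, a writable snapshot, and an OCI runtime spec.
	container, err := client.NewContainer(ctx, "typha-sketch",
		containerd.WithNewSnapshot("typha-sketch-snapshot", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: a task is the running instance of the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}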
Error: unexpected end of JSON input" Jan 24 00:35:51.396622 kubelet[2555]: E0124 00:35:51.396261 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.396622 kubelet[2555]: W0124 00:35:51.396267 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.396622 kubelet[2555]: E0124 00:35:51.396273 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.396748 kubelet[2555]: E0124 00:35:51.396640 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.396748 kubelet[2555]: W0124 00:35:51.396648 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.396748 kubelet[2555]: E0124 00:35:51.396656 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.396966 kubelet[2555]: E0124 00:35:51.396951 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.396966 kubelet[2555]: W0124 00:35:51.396962 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.397098 kubelet[2555]: E0124 00:35:51.396969 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.397399 kubelet[2555]: E0124 00:35:51.397385 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.397399 kubelet[2555]: W0124 00:35:51.397396 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.397454 kubelet[2555]: E0124 00:35:51.397403 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.399121 kubelet[2555]: E0124 00:35:51.399105 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.399163 kubelet[2555]: W0124 00:35:51.399118 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.399163 kubelet[2555]: E0124 00:35:51.399135 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:51.399356 kubelet[2555]: E0124 00:35:51.399344 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.399356 kubelet[2555]: W0124 00:35:51.399354 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.399400 kubelet[2555]: E0124 00:35:51.399361 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.399566 kubelet[2555]: E0124 00:35:51.399552 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.399566 kubelet[2555]: W0124 00:35:51.399562 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.399608 kubelet[2555]: E0124 00:35:51.399569 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.399799 kubelet[2555]: E0124 00:35:51.399786 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.399799 kubelet[2555]: W0124 00:35:51.399796 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.399840 kubelet[2555]: E0124 00:35:51.399803 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.400017 kubelet[2555]: E0124 00:35:51.400005 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.400038 kubelet[2555]: W0124 00:35:51.400017 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.400038 kubelet[2555]: E0124 00:35:51.400023 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.400209 kubelet[2555]: E0124 00:35:51.400197 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.400209 kubelet[2555]: W0124 00:35:51.400208 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.400285 kubelet[2555]: E0124 00:35:51.400215 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:51.401047 kubelet[2555]: E0124 00:35:51.401029 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.401047 kubelet[2555]: W0124 00:35:51.401041 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.401098 kubelet[2555]: E0124 00:35:51.401049 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.407543 kubelet[2555]: E0124 00:35:51.407525 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.407543 kubelet[2555]: W0124 00:35:51.407538 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.407625 kubelet[2555]: E0124 00:35:51.407548 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.408206 kubelet[2555]: E0124 00:35:51.408194 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.408206 kubelet[2555]: W0124 00:35:51.408204 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.408243 kubelet[2555]: E0124 00:35:51.408211 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.408446 kubelet[2555]: E0124 00:35:51.408434 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.408446 kubelet[2555]: W0124 00:35:51.408444 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.408488 kubelet[2555]: E0124 00:35:51.408450 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.409070 kubelet[2555]: E0124 00:35:51.409057 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.409070 kubelet[2555]: W0124 00:35:51.409068 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.409103 kubelet[2555]: E0124 00:35:51.409076 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:51.409511 kubelet[2555]: E0124 00:35:51.409498 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.409511 kubelet[2555]: W0124 00:35:51.409509 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.409553 kubelet[2555]: E0124 00:35:51.409518 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.409949 kubelet[2555]: E0124 00:35:51.409925 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.409988 kubelet[2555]: W0124 00:35:51.409977 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.410013 kubelet[2555]: E0124 00:35:51.409988 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.410552 kubelet[2555]: E0124 00:35:51.410538 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.410583 kubelet[2555]: W0124 00:35:51.410552 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.410583 kubelet[2555]: E0124 00:35:51.410561 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.410992 kubelet[2555]: E0124 00:35:51.410979 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.410992 kubelet[2555]: W0124 00:35:51.410990 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.411032 kubelet[2555]: E0124 00:35:51.410997 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.411869 kubelet[2555]: E0124 00:35:51.411830 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.411869 kubelet[2555]: W0124 00:35:51.411842 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.411927 kubelet[2555]: E0124 00:35:51.411874 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:51.412311 kubelet[2555]: E0124 00:35:51.412289 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.412311 kubelet[2555]: W0124 00:35:51.412308 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.412362 kubelet[2555]: E0124 00:35:51.412315 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.412885 kubelet[2555]: E0124 00:35:51.412855 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.412885 kubelet[2555]: W0124 00:35:51.412867 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.412885 kubelet[2555]: E0124 00:35:51.412874 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.413310 kubelet[2555]: E0124 00:35:51.413296 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.413310 kubelet[2555]: W0124 00:35:51.413307 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.413353 kubelet[2555]: E0124 00:35:51.413315 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.413836 kubelet[2555]: E0124 00:35:51.413819 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.413879 kubelet[2555]: W0124 00:35:51.413831 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.413879 kubelet[2555]: E0124 00:35:51.413856 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.414538 kubelet[2555]: E0124 00:35:51.414520 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.414538 kubelet[2555]: W0124 00:35:51.414533 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.414583 kubelet[2555]: E0124 00:35:51.414541 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:51.415707 kubelet[2555]: E0124 00:35:51.415687 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.415757 kubelet[2555]: W0124 00:35:51.415701 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.415757 kubelet[2555]: E0124 00:35:51.415723 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.415957 kubelet[2555]: E0124 00:35:51.415932 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.415983 kubelet[2555]: W0124 00:35:51.415960 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.415983 kubelet[2555]: E0124 00:35:51.415967 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.417102 kubelet[2555]: E0124 00:35:51.417087 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.417102 kubelet[2555]: W0124 00:35:51.417099 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.417144 kubelet[2555]: E0124 00:35:51.417116 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:51.417380 kubelet[2555]: E0124 00:35:51.417366 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:51.417380 kubelet[2555]: W0124 00:35:51.417377 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:51.417425 kubelet[2555]: E0124 00:35:51.417384 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:52.273997 kubelet[2555]: E0124 00:35:52.272878 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:35:52.363624 kubelet[2555]: I0124 00:35:52.363551 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:35:52.408615 kubelet[2555]: E0124 00:35:52.408460 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.408615 kubelet[2555]: W0124 00:35:52.408495 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.408615 kubelet[2555]: E0124 00:35:52.408522 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.410138 kubelet[2555]: E0124 00:35:52.409014 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.410138 kubelet[2555]: W0124 00:35:52.409030 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.410138 kubelet[2555]: E0124 00:35:52.409047 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.410138 kubelet[2555]: E0124 00:35:52.409484 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.410138 kubelet[2555]: W0124 00:35:52.409498 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.410138 kubelet[2555]: E0124 00:35:52.409513 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.410138 kubelet[2555]: E0124 00:35:52.409932 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.410138 kubelet[2555]: W0124 00:35:52.409992 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.410138 kubelet[2555]: E0124 00:35:52.410015 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:52.411370 kubelet[2555]: E0124 00:35:52.410485 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.411370 kubelet[2555]: W0124 00:35:52.410500 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.411370 kubelet[2555]: E0124 00:35:52.410515 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.411370 kubelet[2555]: E0124 00:35:52.410916 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.411370 kubelet[2555]: W0124 00:35:52.410929 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.411370 kubelet[2555]: E0124 00:35:52.410986 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.412195 kubelet[2555]: E0124 00:35:52.411400 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.412195 kubelet[2555]: W0124 00:35:52.411414 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.412195 kubelet[2555]: E0124 00:35:52.411428 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.412195 kubelet[2555]: E0124 00:35:52.412002 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.412195 kubelet[2555]: W0124 00:35:52.412045 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.412195 kubelet[2555]: E0124 00:35:52.412063 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.412662 kubelet[2555]: E0124 00:35:52.412533 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.412662 kubelet[2555]: W0124 00:35:52.412552 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.412662 kubelet[2555]: E0124 00:35:52.412566 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:52.413295 kubelet[2555]: E0124 00:35:52.413190 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.413295 kubelet[2555]: W0124 00:35:52.413206 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.413295 kubelet[2555]: E0124 00:35:52.413223 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.413772 kubelet[2555]: E0124 00:35:52.413755 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.413772 kubelet[2555]: W0124 00:35:52.413771 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.414012 kubelet[2555]: E0124 00:35:52.413786 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.414603 kubelet[2555]: E0124 00:35:52.414375 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.414603 kubelet[2555]: W0124 00:35:52.414399 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.414603 kubelet[2555]: E0124 00:35:52.414420 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.415383 kubelet[2555]: E0124 00:35:52.415360 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.415851 kubelet[2555]: W0124 00:35:52.415805 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.415851 kubelet[2555]: E0124 00:35:52.415842 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.416450 kubelet[2555]: E0124 00:35:52.416393 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.416450 kubelet[2555]: W0124 00:35:52.416416 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.416450 kubelet[2555]: E0124 00:35:52.416432 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:52.417021 kubelet[2555]: E0124 00:35:52.416928 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.417021 kubelet[2555]: W0124 00:35:52.417008 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.417198 kubelet[2555]: E0124 00:35:52.417024 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.418132 kubelet[2555]: E0124 00:35:52.418077 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.418132 kubelet[2555]: W0124 00:35:52.418121 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.418271 kubelet[2555]: E0124 00:35:52.418139 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.418768 kubelet[2555]: E0124 00:35:52.418727 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.418768 kubelet[2555]: W0124 00:35:52.418763 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.419016 kubelet[2555]: E0124 00:35:52.418786 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.419500 kubelet[2555]: E0124 00:35:52.419461 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.419500 kubelet[2555]: W0124 00:35:52.419487 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.419633 kubelet[2555]: E0124 00:35:52.419507 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.420163 kubelet[2555]: E0124 00:35:52.420124 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.420163 kubelet[2555]: W0124 00:35:52.420150 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.420282 kubelet[2555]: E0124 00:35:52.420167 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:52.420679 kubelet[2555]: E0124 00:35:52.420644 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.420679 kubelet[2555]: W0124 00:35:52.420664 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.420776 kubelet[2555]: E0124 00:35:52.420679 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.421240 kubelet[2555]: E0124 00:35:52.421203 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.421240 kubelet[2555]: W0124 00:35:52.421224 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.421240 kubelet[2555]: E0124 00:35:52.421239 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.421759 kubelet[2555]: E0124 00:35:52.421726 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.421759 kubelet[2555]: W0124 00:35:52.421745 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.421852 kubelet[2555]: E0124 00:35:52.421761 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.422344 kubelet[2555]: E0124 00:35:52.422304 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.422344 kubelet[2555]: W0124 00:35:52.422328 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.422344 kubelet[2555]: E0124 00:35:52.422345 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.422799 kubelet[2555]: E0124 00:35:52.422768 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.422799 kubelet[2555]: W0124 00:35:52.422788 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.422930 kubelet[2555]: E0124 00:35:52.422803 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:52.423331 kubelet[2555]: E0124 00:35:52.423293 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.423331 kubelet[2555]: W0124 00:35:52.423318 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.423427 kubelet[2555]: E0124 00:35:52.423335 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.423820 kubelet[2555]: E0124 00:35:52.423789 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.423820 kubelet[2555]: W0124 00:35:52.423809 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.423913 kubelet[2555]: E0124 00:35:52.423824 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.424634 kubelet[2555]: E0124 00:35:52.424601 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.424704 kubelet[2555]: W0124 00:35:52.424663 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.424704 kubelet[2555]: E0124 00:35:52.424679 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.425221 kubelet[2555]: E0124 00:35:52.425189 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.425221 kubelet[2555]: W0124 00:35:52.425210 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.425325 kubelet[2555]: E0124 00:35:52.425225 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.425712 kubelet[2555]: E0124 00:35:52.425675 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.425712 kubelet[2555]: W0124 00:35:52.425701 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.425812 kubelet[2555]: E0124 00:35:52.425720 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:52.426252 kubelet[2555]: E0124 00:35:52.426220 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.426252 kubelet[2555]: W0124 00:35:52.426241 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.426363 kubelet[2555]: E0124 00:35:52.426257 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.426873 kubelet[2555]: E0124 00:35:52.426849 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.426873 kubelet[2555]: W0124 00:35:52.426868 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.427012 kubelet[2555]: E0124 00:35:52.426883 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.427530 kubelet[2555]: E0124 00:35:52.427496 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.427530 kubelet[2555]: W0124 00:35:52.427517 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.427648 kubelet[2555]: E0124 00:35:52.427532 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:52.428065 kubelet[2555]: E0124 00:35:52.428027 2555 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:52.428065 kubelet[2555]: W0124 00:35:52.428052 2555 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:52.428204 kubelet[2555]: E0124 00:35:52.428069 2555 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:52.791648 containerd[1515]: time="2026-01-24T00:35:52.791550974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:52.793436 containerd[1515]: time="2026-01-24T00:35:52.793368776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 24 00:35:52.794620 containerd[1515]: time="2026-01-24T00:35:52.794536993Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:52.799924 containerd[1515]: time="2026-01-24T00:35:52.798517959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:52.800120 containerd[1515]: time="2026-01-24T00:35:52.800080432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.660182845s" Jan 24 00:35:52.800209 containerd[1515]: time="2026-01-24T00:35:52.800186431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:35:52.805898 containerd[1515]: time="2026-01-24T00:35:52.805795360Z" level=info msg="CreateContainer within sandbox \"39fe413b913a29b29e43146b780c261e02af4e7ac85d35702ef4e44e2af4a2e8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:35:52.835897 containerd[1515]: time="2026-01-24T00:35:52.835805913Z" level=info msg="CreateContainer within sandbox \"39fe413b913a29b29e43146b780c261e02af4e7ac85d35702ef4e44e2af4a2e8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7a7b25a1903508569d174f662f51e191c0ce07bfece8850046d76f40cd1c5c12\"" Jan 24 00:35:52.836622 containerd[1515]: time="2026-01-24T00:35:52.836485412Z" level=info msg="StartContainer for \"7a7b25a1903508569d174f662f51e191c0ce07bfece8850046d76f40cd1c5c12\"" Jan 24 00:35:52.898624 systemd[1]: Started cri-containerd-7a7b25a1903508569d174f662f51e191c0ce07bfece8850046d76f40cd1c5c12.scope - libcontainer container 7a7b25a1903508569d174f662f51e191c0ce07bfece8850046d76f40cd1c5c12. Jan 24 00:35:52.947385 containerd[1515]: time="2026-01-24T00:35:52.947256963Z" level=info msg="StartContainer for \"7a7b25a1903508569d174f662f51e191c0ce07bfece8850046d76f40cd1c5c12\" returns successfully" Jan 24 00:35:52.975412 systemd[1]: cri-containerd-7a7b25a1903508569d174f662f51e191c0ce07bfece8850046d76f40cd1c5c12.scope: Deactivated successfully. Jan 24 00:35:53.019751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a7b25a1903508569d174f662f51e191c0ce07bfece8850046d76f40cd1c5c12-rootfs.mount: Deactivated successfully. 
Jan 24 00:35:53.058889 containerd[1515]: time="2026-01-24T00:35:53.058730484Z" level=info msg="shim disconnected" id=7a7b25a1903508569d174f662f51e191c0ce07bfece8850046d76f40cd1c5c12 namespace=k8s.io Jan 24 00:35:53.058889 containerd[1515]: time="2026-01-24T00:35:53.058788800Z" level=warning msg="cleaning up after shim disconnected" id=7a7b25a1903508569d174f662f51e191c0ce07bfece8850046d76f40cd1c5c12 namespace=k8s.io Jan 24 00:35:53.058889 containerd[1515]: time="2026-01-24T00:35:53.058801375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:35:53.371426 containerd[1515]: time="2026-01-24T00:35:53.370891633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:35:53.395013 kubelet[2555]: I0124 00:35:53.393053 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64854f4c98-mp4sj" podStartSLOduration=3.371719574 podStartE2EDuration="6.393024471s" podCreationTimestamp="2026-01-24 00:35:47 +0000 UTC" firstStartedPulling="2026-01-24 00:35:48.11811309 +0000 UTC m=+22.044263996" lastFinishedPulling="2026-01-24 00:35:51.139417977 +0000 UTC m=+25.065568893" observedRunningTime="2026-01-24 00:35:51.39400845 +0000 UTC m=+25.320159356" watchObservedRunningTime="2026-01-24 00:35:53.393024471 +0000 UTC m=+27.319175417" Jan 24 00:35:54.273397 kubelet[2555]: E0124 00:35:54.272873 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:35:55.811135 containerd[1515]: time="2026-01-24T00:35:55.811094798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:55.812125 containerd[1515]: time="2026-01-24T00:35:55.812036631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:35:55.812990 containerd[1515]: time="2026-01-24T00:35:55.812970729Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:55.814976 containerd[1515]: time="2026-01-24T00:35:55.814909687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:55.815753 containerd[1515]: time="2026-01-24T00:35:55.815355587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.444308505s" Jan 24 00:35:55.815753 containerd[1515]: time="2026-01-24T00:35:55.815378787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:35:55.818796 containerd[1515]: time="2026-01-24T00:35:55.818773185Z" level=info msg="CreateContainer within sandbox \"39fe413b913a29b29e43146b780c261e02af4e7ac85d35702ef4e44e2af4a2e8\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:35:55.830579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766569880.mount: Deactivated successfully. Jan 24 00:35:55.840093 containerd[1515]: time="2026-01-24T00:35:55.840062484Z" level=info msg="CreateContainer within sandbox \"39fe413b913a29b29e43146b780c261e02af4e7ac85d35702ef4e44e2af4a2e8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6380f7c21ef52a4ea18d70528637761cb5e4f16317be4e30c12328927cb7f541\"" Jan 24 00:35:55.841251 containerd[1515]: time="2026-01-24T00:35:55.840466389Z" level=info msg="StartContainer for \"6380f7c21ef52a4ea18d70528637761cb5e4f16317be4e30c12328927cb7f541\"" Jan 24 00:35:55.878053 systemd[1]: Started cri-containerd-6380f7c21ef52a4ea18d70528637761cb5e4f16317be4e30c12328927cb7f541.scope - libcontainer container 6380f7c21ef52a4ea18d70528637761cb5e4f16317be4e30c12328927cb7f541. Jan 24 00:35:55.902350 containerd[1515]: time="2026-01-24T00:35:55.902313779Z" level=info msg="StartContainer for \"6380f7c21ef52a4ea18d70528637761cb5e4f16317be4e30c12328927cb7f541\" returns successfully" Jan 24 00:35:56.276854 kubelet[2555]: E0124 00:35:56.276814 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:35:56.346921 containerd[1515]: time="2026-01-24T00:35:56.346867582Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:35:56.350071 systemd[1]: cri-containerd-6380f7c21ef52a4ea18d70528637761cb5e4f16317be4e30c12328927cb7f541.scope: Deactivated successfully. Jan 24 00:35:56.367849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6380f7c21ef52a4ea18d70528637761cb5e4f16317be4e30c12328927cb7f541-rootfs.mount: Deactivated successfully. 
Jan 24 00:35:56.423741 kubelet[2555]: I0124 00:35:56.423721 2555 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:35:56.439631 containerd[1515]: time="2026-01-24T00:35:56.439115684Z" level=info msg="shim disconnected" id=6380f7c21ef52a4ea18d70528637761cb5e4f16317be4e30c12328927cb7f541 namespace=k8s.io Jan 24 00:35:56.439631 containerd[1515]: time="2026-01-24T00:35:56.439157181Z" level=warning msg="cleaning up after shim disconnected" id=6380f7c21ef52a4ea18d70528637761cb5e4f16317be4e30c12328927cb7f541 namespace=k8s.io Jan 24 00:35:56.439631 containerd[1515]: time="2026-01-24T00:35:56.439164454Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:35:56.440582 containerd[1515]: time="2026-01-24T00:35:56.439137762Z" level=error msg="collecting metrics for 6380f7c21ef52a4ea18d70528637761cb5e4f16317be4e30c12328927cb7f541" error="ttrpc: closed: unknown" Jan 24 00:35:56.456149 containerd[1515]: time="2026-01-24T00:35:56.455983190Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:35:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:35:56.477547 systemd[1]: Created slice kubepods-burstable-pod445371de_a1b4_4071_8903_a5dc58d21e9e.slice - libcontainer container kubepods-burstable-pod445371de_a1b4_4071_8903_a5dc58d21e9e.slice. Jan 24 00:35:56.489771 systemd[1]: Created slice kubepods-besteffort-pod170e1ff7_7c24_4c14_911e_0eb52bdbb523.slice - libcontainer container kubepods-besteffort-pod170e1ff7_7c24_4c14_911e_0eb52bdbb523.slice. Jan 24 00:35:56.498620 systemd[1]: Created slice kubepods-besteffort-pod3380c1f2_8b6a_4c4c_8029_b87f9aa9e7d9.slice - libcontainer container kubepods-besteffort-pod3380c1f2_8b6a_4c4c_8029_b87f9aa9e7d9.slice. Jan 24 00:35:56.503748 systemd[1]: Created slice kubepods-burstable-podf50723bb_0fb2_4f3f_b014_3c7c00d05077.slice - libcontainer container kubepods-burstable-podf50723bb_0fb2_4f3f_b014_3c7c00d05077.slice. Jan 24 00:35:56.510356 systemd[1]: Created slice kubepods-besteffort-podf4761a40_d4c5_46a6_ba7c_5af41f9766d5.slice - libcontainer container kubepods-besteffort-podf4761a40_d4c5_46a6_ba7c_5af41f9766d5.slice. Jan 24 00:35:56.516013 systemd[1]: Created slice kubepods-besteffort-pod639522bb_4ded_4c6d_8204_2dc920251ed9.slice - libcontainer container kubepods-besteffort-pod639522bb_4ded_4c6d_8204_2dc920251ed9.slice. Jan 24 00:35:56.526982 systemd[1]: Created slice kubepods-besteffort-pod88376c0e_7993_4786_9815_0474220bc333.slice - libcontainer container kubepods-besteffort-pod88376c0e_7993_4786_9815_0474220bc333.slice. 
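The burst of "Created slice" lines is kubelet's systemd cgroup driver materializing one slice per pod as the node turns ready: the QoS class appears in the prefix (burstable vs besteffort) and the pod UID in the suffix, with dashes rewritten to underscores because "-" is systemd's hierarchy separator in slice names. The mapping, reconstructed from the lines above as a hypothetical helper (not kubelet code):

    // slicename.go: map a pod UID + QoS class to its kubepods slice name.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, uid string) string {
        // kubepods-burstable-pod445371de_a1b4_....slice in the log corresponds
        // to UID 445371de-a1b4-... in the burstable QoS class.
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "445371de-a1b4-4071-8903-a5dc58d21e9e"))
        fmt.Println(podSlice("besteffort", "170e1ff7-7c24-4c14-911e-0eb52bdbb523"))
    }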
Jan 24 00:35:56.548442 kubelet[2555]: I0124 00:35:56.548249 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/639522bb-4ded-4c6d-8204-2dc920251ed9-goldmane-ca-bundle\") pod \"goldmane-666569f655-v8smt\" (UID: \"639522bb-4ded-4c6d-8204-2dc920251ed9\") " pod="calico-system/goldmane-666569f655-v8smt" Jan 24 00:35:56.548600 kubelet[2555]: I0124 00:35:56.548589 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/639522bb-4ded-4c6d-8204-2dc920251ed9-goldmane-key-pair\") pod \"goldmane-666569f655-v8smt\" (UID: \"639522bb-4ded-4c6d-8204-2dc920251ed9\") " pod="calico-system/goldmane-666569f655-v8smt" Jan 24 00:35:56.548662 kubelet[2555]: I0124 00:35:56.548654 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9-tigera-ca-bundle\") pod \"calico-kube-controllers-6494d5bd79-znrpb\" (UID: \"3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9\") " pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" Jan 24 00:35:56.548709 kubelet[2555]: I0124 00:35:56.548700 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f50723bb-0fb2-4f3f-b014-3c7c00d05077-config-volume\") pod \"coredns-674b8bbfcf-wtrng\" (UID: \"f50723bb-0fb2-4f3f-b014-3c7c00d05077\") " pod="kube-system/coredns-674b8bbfcf-wtrng" Jan 24 00:35:56.548767 kubelet[2555]: I0124 00:35:56.548759 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbx6l\" (UniqueName: \"kubernetes.io/projected/f50723bb-0fb2-4f3f-b014-3c7c00d05077-kube-api-access-kbx6l\") pod \"coredns-674b8bbfcf-wtrng\" (UID: \"f50723bb-0fb2-4f3f-b014-3c7c00d05077\") " pod="kube-system/coredns-674b8bbfcf-wtrng" Jan 24 00:35:56.548810 kubelet[2555]: I0124 00:35:56.548803 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjtcg\" (UniqueName: \"kubernetes.io/projected/170e1ff7-7c24-4c14-911e-0eb52bdbb523-kube-api-access-bjtcg\") pod \"whisker-6f8b46cc7d-cmcg7\" (UID: \"170e1ff7-7c24-4c14-911e-0eb52bdbb523\") " pod="calico-system/whisker-6f8b46cc7d-cmcg7" Jan 24 00:35:56.548858 kubelet[2555]: I0124 00:35:56.548850 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639522bb-4ded-4c6d-8204-2dc920251ed9-config\") pod \"goldmane-666569f655-v8smt\" (UID: \"639522bb-4ded-4c6d-8204-2dc920251ed9\") " pod="calico-system/goldmane-666569f655-v8smt" Jan 24 00:35:56.548903 kubelet[2555]: I0124 00:35:56.548889 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdr4x\" (UniqueName: \"kubernetes.io/projected/f4761a40-d4c5-46a6-ba7c-5af41f9766d5-kube-api-access-wdr4x\") pod \"calico-apiserver-79c764d8b9-6vp5z\" (UID: \"f4761a40-d4c5-46a6-ba7c-5af41f9766d5\") " pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" Jan 24 00:35:56.548966 kubelet[2555]: I0124 00:35:56.548958 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vctxc\" (UniqueName: 
\"kubernetes.io/projected/3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9-kube-api-access-vctxc\") pod \"calico-kube-controllers-6494d5bd79-znrpb\" (UID: \"3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9\") " pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" Jan 24 00:35:56.549039 kubelet[2555]: I0124 00:35:56.549031 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrslm\" (UniqueName: \"kubernetes.io/projected/639522bb-4ded-4c6d-8204-2dc920251ed9-kube-api-access-qrslm\") pod \"goldmane-666569f655-v8smt\" (UID: \"639522bb-4ded-4c6d-8204-2dc920251ed9\") " pod="calico-system/goldmane-666569f655-v8smt" Jan 24 00:35:56.549077 kubelet[2555]: I0124 00:35:56.549070 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4761a40-d4c5-46a6-ba7c-5af41f9766d5-calico-apiserver-certs\") pod \"calico-apiserver-79c764d8b9-6vp5z\" (UID: \"f4761a40-d4c5-46a6-ba7c-5af41f9766d5\") " pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" Jan 24 00:35:56.549135 kubelet[2555]: I0124 00:35:56.549127 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/170e1ff7-7c24-4c14-911e-0eb52bdbb523-whisker-ca-bundle\") pod \"whisker-6f8b46cc7d-cmcg7\" (UID: \"170e1ff7-7c24-4c14-911e-0eb52bdbb523\") " pod="calico-system/whisker-6f8b46cc7d-cmcg7" Jan 24 00:35:56.549178 kubelet[2555]: I0124 00:35:56.549171 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/88376c0e-7993-4786-9815-0474220bc333-calico-apiserver-certs\") pod \"calico-apiserver-79c764d8b9-zh62f\" (UID: \"88376c0e-7993-4786-9815-0474220bc333\") " pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" Jan 24 00:35:56.549225 kubelet[2555]: I0124 00:35:56.549218 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmf64\" (UniqueName: \"kubernetes.io/projected/88376c0e-7993-4786-9815-0474220bc333-kube-api-access-wmf64\") pod \"calico-apiserver-79c764d8b9-zh62f\" (UID: \"88376c0e-7993-4786-9815-0474220bc333\") " pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" Jan 24 00:35:56.549267 kubelet[2555]: I0124 00:35:56.549258 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/170e1ff7-7c24-4c14-911e-0eb52bdbb523-whisker-backend-key-pair\") pod \"whisker-6f8b46cc7d-cmcg7\" (UID: \"170e1ff7-7c24-4c14-911e-0eb52bdbb523\") " pod="calico-system/whisker-6f8b46cc7d-cmcg7" Jan 24 00:35:56.549314 kubelet[2555]: I0124 00:35:56.549307 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/445371de-a1b4-4071-8903-a5dc58d21e9e-config-volume\") pod \"coredns-674b8bbfcf-nznf8\" (UID: \"445371de-a1b4-4071-8903-a5dc58d21e9e\") " pod="kube-system/coredns-674b8bbfcf-nznf8" Jan 24 00:35:56.549349 kubelet[2555]: I0124 00:35:56.549342 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tzqs\" (UniqueName: \"kubernetes.io/projected/445371de-a1b4-4071-8903-a5dc58d21e9e-kube-api-access-7tzqs\") pod \"coredns-674b8bbfcf-nznf8\" (UID: 
\"445371de-a1b4-4071-8903-a5dc58d21e9e\") " pod="kube-system/coredns-674b8bbfcf-nznf8" Jan 24 00:35:56.787115 containerd[1515]: time="2026-01-24T00:35:56.786899359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nznf8,Uid:445371de-a1b4-4071-8903-a5dc58d21e9e,Namespace:kube-system,Attempt:0,}" Jan 24 00:35:56.794933 containerd[1515]: time="2026-01-24T00:35:56.794865156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f8b46cc7d-cmcg7,Uid:170e1ff7-7c24-4c14-911e-0eb52bdbb523,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:56.802980 containerd[1515]: time="2026-01-24T00:35:56.802377037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6494d5bd79-znrpb,Uid:3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:56.809661 containerd[1515]: time="2026-01-24T00:35:56.809576056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wtrng,Uid:f50723bb-0fb2-4f3f-b014-3c7c00d05077,Namespace:kube-system,Attempt:0,}" Jan 24 00:35:56.817536 containerd[1515]: time="2026-01-24T00:35:56.816435894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c764d8b9-6vp5z,Uid:f4761a40-d4c5-46a6-ba7c-5af41f9766d5,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:35:56.829886 containerd[1515]: time="2026-01-24T00:35:56.827571069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v8smt,Uid:639522bb-4ded-4c6d-8204-2dc920251ed9,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:56.832343 containerd[1515]: time="2026-01-24T00:35:56.832298169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c764d8b9-zh62f,Uid:88376c0e-7993-4786-9815-0474220bc333,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:35:57.000782 containerd[1515]: time="2026-01-24T00:35:57.000739805Z" level=error msg="Failed to destroy network for sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.001852 containerd[1515]: time="2026-01-24T00:35:57.001828472Z" level=error msg="encountered an error cleaning up failed sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.002087 containerd[1515]: time="2026-01-24T00:35:57.002059057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f8b46cc7d-cmcg7,Uid:170e1ff7-7c24-4c14-911e-0eb52bdbb523,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.002612 kubelet[2555]: E0124 00:35:57.002442 2555 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.002612 kubelet[2555]: E0124 00:35:57.002516 2555 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f8b46cc7d-cmcg7" Jan 24 00:35:57.002612 kubelet[2555]: E0124 00:35:57.002535 2555 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f8b46cc7d-cmcg7" Jan 24 00:35:57.002708 kubelet[2555]: E0124 00:35:57.002583 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f8b46cc7d-cmcg7_calico-system(170e1ff7-7c24-4c14-911e-0eb52bdbb523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f8b46cc7d-cmcg7_calico-system(170e1ff7-7c24-4c14-911e-0eb52bdbb523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f8b46cc7d-cmcg7" podUID="170e1ff7-7c24-4c14-911e-0eb52bdbb523" Jan 24 00:35:57.009735 containerd[1515]: time="2026-01-24T00:35:57.009698476Z" level=error msg="Failed to destroy network for sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.010088 containerd[1515]: time="2026-01-24T00:35:57.010059410Z" level=error msg="encountered an error cleaning up failed sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.010157 containerd[1515]: time="2026-01-24T00:35:57.010105107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nznf8,Uid:445371de-a1b4-4071-8903-a5dc58d21e9e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.010281 kubelet[2555]: E0124 00:35:57.010233 2555 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.010314 kubelet[2555]: E0124 00:35:57.010295 2555 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nznf8" Jan 24 00:35:57.010333 kubelet[2555]: E0124 00:35:57.010310 2555 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nznf8" Jan 24 00:35:57.010444 kubelet[2555]: E0124 00:35:57.010362 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nznf8_kube-system(445371de-a1b4-4071-8903-a5dc58d21e9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nznf8_kube-system(445371de-a1b4-4071-8903-a5dc58d21e9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nznf8" podUID="445371de-a1b4-4071-8903-a5dc58d21e9e" Jan 24 00:35:57.017586 containerd[1515]: time="2026-01-24T00:35:57.017472484Z" level=error msg="Failed to destroy network for sandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.017758 containerd[1515]: time="2026-01-24T00:35:57.017737822Z" level=error msg="encountered an error cleaning up failed sandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.017783 containerd[1515]: time="2026-01-24T00:35:57.017770764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6494d5bd79-znrpb,Uid:3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.017949 kubelet[2555]: E0124 00:35:57.017886 2555 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.017949 kubelet[2555]: E0124 00:35:57.017919 2555 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" Jan 24 00:35:57.018007 kubelet[2555]: E0124 00:35:57.017953 2555 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" Jan 24 00:35:57.018007 kubelet[2555]: E0124 00:35:57.017984 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6494d5bd79-znrpb_calico-system(3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6494d5bd79-znrpb_calico-system(3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:35:57.045751 containerd[1515]: time="2026-01-24T00:35:57.045554672Z" level=error msg="Failed to destroy network for sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.045927 containerd[1515]: time="2026-01-24T00:35:57.045588614Z" level=error msg="Failed to destroy network for sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.046946 containerd[1515]: time="2026-01-24T00:35:57.046909533Z" level=error msg="encountered an error cleaning up failed sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.047071 containerd[1515]: time="2026-01-24T00:35:57.047007609Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-wtrng,Uid:f50723bb-0fb2-4f3f-b014-3c7c00d05077,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.047071 containerd[1515]: time="2026-01-24T00:35:57.047027486Z" level=error msg="encountered an error cleaning up failed sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.047128 containerd[1515]: time="2026-01-24T00:35:57.047069952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c764d8b9-6vp5z,Uid:f4761a40-d4c5-46a6-ba7c-5af41f9766d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.047817 kubelet[2555]: E0124 00:35:57.047492 2555 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.047817 kubelet[2555]: E0124 00:35:57.047537 2555 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" Jan 24 00:35:57.047817 kubelet[2555]: E0124 00:35:57.047556 2555 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" Jan 24 00:35:57.047817 kubelet[2555]: E0124 00:35:57.047493 2555 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.047928 kubelet[2555]: E0124 00:35:57.047592 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-79c764d8b9-6vp5z_calico-apiserver(f4761a40-d4c5-46a6-ba7c-5af41f9766d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79c764d8b9-6vp5z_calico-apiserver(f4761a40-d4c5-46a6-ba7c-5af41f9766d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:35:57.047928 kubelet[2555]: E0124 00:35:57.047601 2555 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wtrng" Jan 24 00:35:57.047928 kubelet[2555]: E0124 00:35:57.047616 2555 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wtrng" Jan 24 00:35:57.048035 kubelet[2555]: E0124 00:35:57.047633 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wtrng_kube-system(f50723bb-0fb2-4f3f-b014-3c7c00d05077)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wtrng_kube-system(f50723bb-0fb2-4f3f-b014-3c7c00d05077)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wtrng" podUID="f50723bb-0fb2-4f3f-b014-3c7c00d05077" Jan 24 00:35:57.049791 containerd[1515]: time="2026-01-24T00:35:57.049435538Z" level=error msg="Failed to destroy network for sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.049791 containerd[1515]: time="2026-01-24T00:35:57.049703137Z" level=error msg="encountered an error cleaning up failed sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.049791 containerd[1515]: time="2026-01-24T00:35:57.049732938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c764d8b9-zh62f,Uid:88376c0e-7993-4786-9815-0474220bc333,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.050136 kubelet[2555]: E0124 00:35:57.050113 2555 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.050176 kubelet[2555]: E0124 00:35:57.050138 2555 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" Jan 24 00:35:57.050176 kubelet[2555]: E0124 00:35:57.050150 2555 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" Jan 24 00:35:57.050218 kubelet[2555]: E0124 00:35:57.050198 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79c764d8b9-zh62f_calico-apiserver(88376c0e-7993-4786-9815-0474220bc333)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79c764d8b9-zh62f_calico-apiserver(88376c0e-7993-4786-9815-0474220bc333)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:35:57.051561 containerd[1515]: time="2026-01-24T00:35:57.051541588Z" level=error msg="Failed to destroy network for sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.051807 containerd[1515]: time="2026-01-24T00:35:57.051791280Z" level=error msg="encountered an error cleaning up failed sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.051838 containerd[1515]: time="2026-01-24T00:35:57.051817660Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-v8smt,Uid:639522bb-4ded-4c6d-8204-2dc920251ed9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.052027 kubelet[2555]: E0124 00:35:57.051921 2555 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.052027 kubelet[2555]: E0124 00:35:57.051961 2555 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-v8smt" Jan 24 00:35:57.052027 kubelet[2555]: E0124 00:35:57.051971 2555 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-v8smt" Jan 24 00:35:57.052105 kubelet[2555]: E0124 00:35:57.051993 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-v8smt_calico-system(639522bb-4ded-4c6d-8204-2dc920251ed9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-v8smt_calico-system(639522bb-4ded-4c6d-8204-2dc920251ed9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:35:57.379142 kubelet[2555]: I0124 00:35:57.378752 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:35:57.382756 containerd[1515]: time="2026-01-24T00:35:57.379644705Z" level=info msg="StopPodSandbox for \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\"" Jan 24 00:35:57.382756 containerd[1515]: time="2026-01-24T00:35:57.379876031Z" level=info msg="Ensure that sandbox 86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560 in task-service has been cleanup successfully" Jan 24 00:35:57.387275 kubelet[2555]: I0124 00:35:57.386153 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:35:57.389330 containerd[1515]: 
time="2026-01-24T00:35:57.389275231Z" level=info msg="StopPodSandbox for \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\"" Jan 24 00:35:57.389598 containerd[1515]: time="2026-01-24T00:35:57.389542010Z" level=info msg="Ensure that sandbox a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d in task-service has been cleanup successfully" Jan 24 00:35:57.393301 kubelet[2555]: I0124 00:35:57.393271 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:35:57.397448 containerd[1515]: time="2026-01-24T00:35:57.396851566Z" level=info msg="StopPodSandbox for \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\"" Jan 24 00:35:57.397448 containerd[1515]: time="2026-01-24T00:35:57.397100988Z" level=info msg="Ensure that sandbox 3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62 in task-service has been cleanup successfully" Jan 24 00:35:57.401700 kubelet[2555]: I0124 00:35:57.401672 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:35:57.403785 containerd[1515]: time="2026-01-24T00:35:57.403748519Z" level=info msg="StopPodSandbox for \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\"" Jan 24 00:35:57.405333 containerd[1515]: time="2026-01-24T00:35:57.405299224Z" level=info msg="Ensure that sandbox abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff in task-service has been cleanup successfully" Jan 24 00:35:57.408355 kubelet[2555]: I0124 00:35:57.407688 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:35:57.411529 containerd[1515]: time="2026-01-24T00:35:57.410123880Z" level=info msg="StopPodSandbox for \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\"" Jan 24 00:35:57.411861 containerd[1515]: time="2026-01-24T00:35:57.411830492Z" level=info msg="Ensure that sandbox 9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f in task-service has been cleanup successfully" Jan 24 00:35:57.417043 kubelet[2555]: I0124 00:35:57.417016 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:35:57.420718 containerd[1515]: time="2026-01-24T00:35:57.420680309Z" level=info msg="StopPodSandbox for \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\"" Jan 24 00:35:57.421095 containerd[1515]: time="2026-01-24T00:35:57.421067522Z" level=info msg="Ensure that sandbox f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338 in task-service has been cleanup successfully" Jan 24 00:35:57.428821 kubelet[2555]: I0124 00:35:57.428115 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:35:57.433463 containerd[1515]: time="2026-01-24T00:35:57.433426848Z" level=info msg="StopPodSandbox for \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\"" Jan 24 00:35:57.434436 containerd[1515]: time="2026-01-24T00:35:57.434390914Z" level=info msg="Ensure that sandbox 0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539 in task-service has been cleanup successfully" Jan 24 00:35:57.456336 containerd[1515]: 
time="2026-01-24T00:35:57.455852160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:35:57.510156 containerd[1515]: time="2026-01-24T00:35:57.510097085Z" level=error msg="StopPodSandbox for \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\" failed" error="failed to destroy network for sandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.510412 kubelet[2555]: E0124 00:35:57.510387 2555 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:35:57.510527 kubelet[2555]: E0124 00:35:57.510492 2555 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338"} Jan 24 00:35:57.510631 kubelet[2555]: E0124 00:35:57.510577 2555 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:35:57.510631 kubelet[2555]: E0124 00:35:57.510598 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:35:57.518124 containerd[1515]: time="2026-01-24T00:35:57.518074608Z" level=error msg="StopPodSandbox for \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\" failed" error="failed to destroy network for sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.518488 kubelet[2555]: E0124 00:35:57.518454 2555 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:35:57.518584 kubelet[2555]: E0124 00:35:57.518568 2555 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff"} Jan 24 00:35:57.518652 kubelet[2555]: E0124 00:35:57.518641 2555 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88376c0e-7993-4786-9815-0474220bc333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:35:57.518728 kubelet[2555]: E0124 00:35:57.518714 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88376c0e-7993-4786-9815-0474220bc333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:35:57.528244 containerd[1515]: time="2026-01-24T00:35:57.528148938Z" level=error msg="StopPodSandbox for \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\" failed" error="failed to destroy network for sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.528376 kubelet[2555]: E0124 00:35:57.528345 2555 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:35:57.528660 kubelet[2555]: E0124 00:35:57.528389 2555 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560"} Jan 24 00:35:57.528660 kubelet[2555]: E0124 00:35:57.528421 2555 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f4761a40-d4c5-46a6-ba7c-5af41f9766d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:35:57.528660 kubelet[2555]: E0124 00:35:57.528444 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f4761a40-d4c5-46a6-ba7c-5af41f9766d5\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:35:57.530810 containerd[1515]: time="2026-01-24T00:35:57.530780482Z" level=error msg="StopPodSandbox for \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\" failed" error="failed to destroy network for sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.531009 kubelet[2555]: E0124 00:35:57.530988 2555 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:35:57.531092 kubelet[2555]: E0124 00:35:57.531081 2555 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62"} Jan 24 00:35:57.531162 kubelet[2555]: E0124 00:35:57.531150 2555 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"170e1ff7-7c24-4c14-911e-0eb52bdbb523\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:35:57.531223 kubelet[2555]: E0124 00:35:57.531212 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"170e1ff7-7c24-4c14-911e-0eb52bdbb523\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f8b46cc7d-cmcg7" podUID="170e1ff7-7c24-4c14-911e-0eb52bdbb523" Jan 24 00:35:57.533850 containerd[1515]: time="2026-01-24T00:35:57.533829611Z" level=error msg="StopPodSandbox for \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\" failed" error="failed to destroy network for sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.534124 kubelet[2555]: E0124 00:35:57.534054 2555 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:35:57.534124 kubelet[2555]: E0124 00:35:57.534076 2555 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d"} Jan 24 00:35:57.534124 kubelet[2555]: E0124 00:35:57.534094 2555 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f50723bb-0fb2-4f3f-b014-3c7c00d05077\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:35:57.534124 kubelet[2555]: E0124 00:35:57.534108 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f50723bb-0fb2-4f3f-b014-3c7c00d05077\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wtrng" podUID="f50723bb-0fb2-4f3f-b014-3c7c00d05077" Jan 24 00:35:57.537826 containerd[1515]: time="2026-01-24T00:35:57.537707768Z" level=error msg="StopPodSandbox for \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\" failed" error="failed to destroy network for sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.538021 kubelet[2555]: E0124 00:35:57.537931 2555 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:35:57.538021 kubelet[2555]: E0124 00:35:57.537977 2555 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f"} Jan 24 00:35:57.538021 kubelet[2555]: E0124 00:35:57.537993 2555 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"639522bb-4ded-4c6d-8204-2dc920251ed9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Jan 24 00:35:57.538021 kubelet[2555]: E0124 00:35:57.538007 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"639522bb-4ded-4c6d-8204-2dc920251ed9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:35:57.538843 containerd[1515]: time="2026-01-24T00:35:57.538810445Z" level=error msg="StopPodSandbox for \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\" failed" error="failed to destroy network for sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:57.538922 kubelet[2555]: E0124 00:35:57.538897 2555 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:35:57.538922 kubelet[2555]: E0124 00:35:57.538920 2555 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539"} Jan 24 00:35:57.539013 kubelet[2555]: E0124 00:35:57.538997 2555 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"445371de-a1b4-4071-8903-a5dc58d21e9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:35:57.539068 kubelet[2555]: E0124 00:35:57.539015 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"445371de-a1b4-4071-8903-a5dc58d21e9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nznf8" podUID="445371de-a1b4-4071-8903-a5dc58d21e9e" Jan 24 00:35:57.828616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d-shm.mount: Deactivated successfully. Jan 24 00:35:57.828715 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338-shm.mount: Deactivated successfully. 
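Every failure above, across seven different pods, trips over the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico/ mounted. Until then both the add path (RunPodSandbox) and the delete path (StopPodSandbox) fail, and kubelet keeps logging "Error syncing pod, skipping" and retrying. A minimal Go sketch of that readiness check, assuming only the path and remediation hint quoted in the error text (this is not Calico's plugin code):

    // Readiness check behind the repeated CNI failures above: stat the
    // nodename file written by calico/node. The path and error wording are
    // taken from the log; everything else is illustrative.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const nodename = "/var/lib/calico/nodename"
        data, err := os.ReadFile(nodename)
        if err != nil {
            if os.IsNotExist(err) {
                fmt.Fprintf(os.Stderr, "stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodename)
                os.Exit(1)
            }
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("node name: %s\n", data)
    }

The retry loop only breaks once calico/node itself comes up, which happens after the image pull finishes at 00:36:01 below.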
Jan 24 00:35:57.828773 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62-shm.mount: Deactivated successfully. Jan 24 00:35:57.828826 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539-shm.mount: Deactivated successfully. Jan 24 00:35:58.286299 systemd[1]: Created slice kubepods-besteffort-pod641bc171_0396_4a65_b184_ec8db27324ea.slice - libcontainer container kubepods-besteffort-pod641bc171_0396_4a65_b184_ec8db27324ea.slice. Jan 24 00:35:58.290065 containerd[1515]: time="2026-01-24T00:35:58.290007350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njp75,Uid:641bc171-0396-4a65-b184-ec8db27324ea,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:58.343040 containerd[1515]: time="2026-01-24T00:35:58.342931100Z" level=error msg="Failed to destroy network for sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:58.345460 containerd[1515]: time="2026-01-24T00:35:58.345415389Z" level=error msg="encountered an error cleaning up failed sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:58.345523 containerd[1515]: time="2026-01-24T00:35:58.345490066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njp75,Uid:641bc171-0396-4a65-b184-ec8db27324ea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:58.346533 kubelet[2555]: E0124 00:35:58.345750 2555 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:58.346533 kubelet[2555]: E0124 00:35:58.345801 2555 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njp75" Jan 24 00:35:58.346533 kubelet[2555]: E0124 00:35:58.345817 2555 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njp75" Jan 24 00:35:58.346642 kubelet[2555]: E0124 00:35:58.345879 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-njp75_calico-system(641bc171-0396-4a65-b184-ec8db27324ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-njp75_calico-system(641bc171-0396-4a65-b184-ec8db27324ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:35:58.352257 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f-shm.mount: Deactivated successfully. Jan 24 00:35:58.456663 kubelet[2555]: I0124 00:35:58.455693 2555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:35:58.463170 containerd[1515]: time="2026-01-24T00:35:58.463117487Z" level=info msg="StopPodSandbox for \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\"" Jan 24 00:35:58.464704 containerd[1515]: time="2026-01-24T00:35:58.464437044Z" level=info msg="Ensure that sandbox 5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f in task-service has been cleanup successfully" Jan 24 00:35:58.508043 containerd[1515]: time="2026-01-24T00:35:58.507755574Z" level=error msg="StopPodSandbox for \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\" failed" error="failed to destroy network for sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:58.508330 kubelet[2555]: E0124 00:35:58.508241 2555 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:35:58.508330 kubelet[2555]: E0124 00:35:58.508310 2555 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f"} Jan 24 00:35:58.508436 kubelet[2555]: E0124 00:35:58.508361 2555 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"641bc171-0396-4a65-b184-ec8db27324ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:35:58.508436 kubelet[2555]: E0124 00:35:58.508396 2555 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"641bc171-0396-4a65-b184-ec8db27324ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:36:01.769619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303053112.mount: Deactivated successfully. Jan 24 00:36:01.801276 containerd[1515]: time="2026-01-24T00:36:01.801227144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:36:01.802435 containerd[1515]: time="2026-01-24T00:36:01.802341101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:36:01.804314 containerd[1515]: time="2026-01-24T00:36:01.803389948Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:36:01.805834 containerd[1515]: time="2026-01-24T00:36:01.805361792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:36:01.805834 containerd[1515]: time="2026-01-24T00:36:01.805738959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.349850526s" Jan 24 00:36:01.805834 containerd[1515]: time="2026-01-24T00:36:01.805767518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:36:01.828453 containerd[1515]: time="2026-01-24T00:36:01.828422935Z" level=info msg="CreateContainer within sandbox \"39fe413b913a29b29e43146b780c261e02af4e7ac85d35702ef4e44e2af4a2e8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:36:01.841884 containerd[1515]: time="2026-01-24T00:36:01.841843954Z" level=info msg="CreateContainer within sandbox \"39fe413b913a29b29e43146b780c261e02af4e7ac85d35702ef4e44e2af4a2e8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0143ea9a1abfe6302c77495f0a074d33b9cc615174a1f140ac397d907ebf0347\"" Jan 24 00:36:01.843610 containerd[1515]: time="2026-01-24T00:36:01.843562000Z" level=info msg="StartContainer for \"0143ea9a1abfe6302c77495f0a074d33b9cc615174a1f140ac397d907ebf0347\"" Jan 24 00:36:01.865040 systemd[1]: Started cri-containerd-0143ea9a1abfe6302c77495f0a074d33b9cc615174a1f140ac397d907ebf0347.scope - libcontainer container 0143ea9a1abfe6302c77495f0a074d33b9cc615174a1f140ac397d907ebf0347. 
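While those retries are still pending, the calico/node image pull started at 00:35:57 completes here: the log reports 156,883,675 bytes read for the pull, and the Pulled message times it at 4.349850526 s, i.e. roughly 34 MiB/s from ghcr.io. A quick check of that rate (both inputs copied from the log; only the MiB conversion is added):

    // Back-of-the-envelope pull rate for ghcr.io/flatcar/calico/node:v3.30.4,
    // from the byte count and duration reported above.
    package main

    import "fmt"

    func main() {
        const bytesRead = 156883675.0 // "bytes read=156883675"
        const seconds = 4.349850526   // "in 4.349850526s"
        fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ~34.4 MiB/s
    }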
Jan 24 00:36:01.892344 containerd[1515]: time="2026-01-24T00:36:01.892311153Z" level=info msg="StartContainer for \"0143ea9a1abfe6302c77495f0a074d33b9cc615174a1f140ac397d907ebf0347\" returns successfully" Jan 24 00:36:01.957213 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:36:01.957316 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 00:36:02.039849 containerd[1515]: time="2026-01-24T00:36:02.039412598Z" level=info msg="StopPodSandbox for \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\"" Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.105 [INFO][3867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.106 [INFO][3867] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" iface="eth0" netns="/var/run/netns/cni-07edc5eb-00ef-82fa-193e-162e51d3b1f9" Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.106 [INFO][3867] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" iface="eth0" netns="/var/run/netns/cni-07edc5eb-00ef-82fa-193e-162e51d3b1f9" Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.106 [INFO][3867] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" iface="eth0" netns="/var/run/netns/cni-07edc5eb-00ef-82fa-193e-162e51d3b1f9" Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.106 [INFO][3867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.106 [INFO][3867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.136 [INFO][3880] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" HandleID="k8s-pod-network.3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--6f8b46cc7d--cmcg7-eth0" Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.137 [INFO][3880] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.137 [INFO][3880] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.142 [WARNING][3880] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" HandleID="k8s-pod-network.3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--6f8b46cc7d--cmcg7-eth0" Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.142 [INFO][3880] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" HandleID="k8s-pod-network.3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--6f8b46cc7d--cmcg7-eth0" Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.143 [INFO][3880] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:02.149425 containerd[1515]: 2026-01-24 00:36:02.146 [INFO][3867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:02.149425 containerd[1515]: time="2026-01-24T00:36:02.149222609Z" level=info msg="TearDown network for sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\" successfully" Jan 24 00:36:02.149425 containerd[1515]: time="2026-01-24T00:36:02.149285748Z" level=info msg="StopPodSandbox for \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\" returns successfully" Jan 24 00:36:02.192624 kubelet[2555]: I0124 00:36:02.192596 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/170e1ff7-7c24-4c14-911e-0eb52bdbb523-whisker-backend-key-pair\") pod \"170e1ff7-7c24-4c14-911e-0eb52bdbb523\" (UID: \"170e1ff7-7c24-4c14-911e-0eb52bdbb523\") " Jan 24 00:36:02.193867 kubelet[2555]: I0124 00:36:02.193604 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjtcg\" (UniqueName: \"kubernetes.io/projected/170e1ff7-7c24-4c14-911e-0eb52bdbb523-kube-api-access-bjtcg\") pod \"170e1ff7-7c24-4c14-911e-0eb52bdbb523\" (UID: \"170e1ff7-7c24-4c14-911e-0eb52bdbb523\") " Jan 24 00:36:02.193867 kubelet[2555]: I0124 00:36:02.193854 2555 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/170e1ff7-7c24-4c14-911e-0eb52bdbb523-whisker-ca-bundle\") pod \"170e1ff7-7c24-4c14-911e-0eb52bdbb523\" (UID: \"170e1ff7-7c24-4c14-911e-0eb52bdbb523\") " Jan 24 00:36:02.196456 kubelet[2555]: I0124 00:36:02.194407 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170e1ff7-7c24-4c14-911e-0eb52bdbb523-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "170e1ff7-7c24-4c14-911e-0eb52bdbb523" (UID: "170e1ff7-7c24-4c14-911e-0eb52bdbb523"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:36:02.197252 kubelet[2555]: I0124 00:36:02.197222 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170e1ff7-7c24-4c14-911e-0eb52bdbb523-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "170e1ff7-7c24-4c14-911e-0eb52bdbb523" (UID: "170e1ff7-7c24-4c14-911e-0eb52bdbb523"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:36:02.201033 kubelet[2555]: I0124 00:36:02.201010 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170e1ff7-7c24-4c14-911e-0eb52bdbb523-kube-api-access-bjtcg" (OuterVolumeSpecName: "kube-api-access-bjtcg") pod "170e1ff7-7c24-4c14-911e-0eb52bdbb523" (UID: "170e1ff7-7c24-4c14-911e-0eb52bdbb523"). InnerVolumeSpecName "kube-api-access-bjtcg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:36:02.280180 systemd[1]: Removed slice kubepods-besteffort-pod170e1ff7_7c24_4c14_911e_0eb52bdbb523.slice - libcontainer container kubepods-besteffort-pod170e1ff7_7c24_4c14_911e_0eb52bdbb523.slice. Jan 24 00:36:02.294616 kubelet[2555]: I0124 00:36:02.294489 2555 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bjtcg\" (UniqueName: \"kubernetes.io/projected/170e1ff7-7c24-4c14-911e-0eb52bdbb523-kube-api-access-bjtcg\") on node \"ci-4081-3-6-n-56b1d28098\" DevicePath \"\"" Jan 24 00:36:02.294616 kubelet[2555]: I0124 00:36:02.294509 2555 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/170e1ff7-7c24-4c14-911e-0eb52bdbb523-whisker-ca-bundle\") on node \"ci-4081-3-6-n-56b1d28098\" DevicePath \"\"" Jan 24 00:36:02.294616 kubelet[2555]: I0124 00:36:02.294518 2555 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/170e1ff7-7c24-4c14-911e-0eb52bdbb523-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-56b1d28098\" DevicePath \"\"" Jan 24 00:36:02.507856 kubelet[2555]: I0124 00:36:02.507783 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2g9sh" podStartSLOduration=1.916573905 podStartE2EDuration="15.507760354s" podCreationTimestamp="2026-01-24 00:35:47 +0000 UTC" firstStartedPulling="2026-01-24 00:35:48.215144535 +0000 UTC m=+22.141295441" lastFinishedPulling="2026-01-24 00:36:01.806330984 +0000 UTC m=+35.732481890" observedRunningTime="2026-01-24 00:36:02.490623261 +0000 UTC m=+36.416774217" watchObservedRunningTime="2026-01-24 00:36:02.507760354 +0000 UTC m=+36.433911310" Jan 24 00:36:02.591049 systemd[1]: Created slice kubepods-besteffort-podb07048e0_47ed_414d_b89a_27e90221643c.slice - libcontainer container kubepods-besteffort-podb07048e0_47ed_414d_b89a_27e90221643c.slice. 
Jan 24 00:36:02.597365 kubelet[2555]: I0124 00:36:02.597308 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b07048e0-47ed-414d-b89a-27e90221643c-whisker-ca-bundle\") pod \"whisker-699d95d6f6-9xqqx\" (UID: \"b07048e0-47ed-414d-b89a-27e90221643c\") " pod="calico-system/whisker-699d95d6f6-9xqqx"
Jan 24 00:36:02.597365 kubelet[2555]: I0124 00:36:02.597359 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cpl9\" (UniqueName: \"kubernetes.io/projected/b07048e0-47ed-414d-b89a-27e90221643c-kube-api-access-9cpl9\") pod \"whisker-699d95d6f6-9xqqx\" (UID: \"b07048e0-47ed-414d-b89a-27e90221643c\") " pod="calico-system/whisker-699d95d6f6-9xqqx"
Jan 24 00:36:02.597573 kubelet[2555]: I0124 00:36:02.597385 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b07048e0-47ed-414d-b89a-27e90221643c-whisker-backend-key-pair\") pod \"whisker-699d95d6f6-9xqqx\" (UID: \"b07048e0-47ed-414d-b89a-27e90221643c\") " pod="calico-system/whisker-699d95d6f6-9xqqx"
Jan 24 00:36:02.776634 systemd[1]: run-netns-cni\x2d07edc5eb\x2d00ef\x2d82fa\x2d193e\x2d162e51d3b1f9.mount: Deactivated successfully.
Jan 24 00:36:02.776813 systemd[1]: var-lib-kubelet-pods-170e1ff7\x2d7c24\x2d4c14\x2d911e\x2d0eb52bdbb523-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbjtcg.mount: Deactivated successfully.
Jan 24 00:36:02.777656 systemd[1]: var-lib-kubelet-pods-170e1ff7\x2d7c24\x2d4c14\x2d911e\x2d0eb52bdbb523-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
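The three mount units systemd just cleaned up are path units in systemd's escaped form: '/' in the path becomes '-', and bytes that would be ambiguous ('-' itself, '~') are written as \x2d and \x7e. A quick decoder for reading such names back (for real work, systemd-escape --unescape --path does the same; this sketch assumes a well-formed path-style .mount unit):

    import re

    def systemd_path_unescape(unit: str) -> str:
        # Decode a systemd path-style unit name back into a filesystem path:
        # "-" encodes "/", and escaped bytes appear as \xHH (e.g. \x2d for "-").
        name = unit.removesuffix(".mount")
        decoded = []
        for part in re.split(r"(\\x[0-9a-fA-F]{2})", name):
            if part.startswith("\\x"):
                decoded.append(chr(int(part[2:], 16)))
            else:
                decoded.append(part.replace("-", "/"))
        return "/" + "".join(decoded)

    unit = (r"var-lib-kubelet-pods-170e1ff7\x2d7c24\x2d4c14\x2d911e"
            r"\x2d0eb52bdbb523-volumes-kubernetes.io\x7esecret-whisker"
            r"\x2dbackend\x2dkey\x2dpair.mount")
    print(systemd_path_unescape(unit))
    # /var/lib/kubelet/pods/170e1ff7-7c24-4c14-911e-0eb52bdbb523/volumes/kubernetes.io~secret/whisker-backend-key-pair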
Jan 24 00:36:02.897697 containerd[1515]: time="2026-01-24T00:36:02.897492447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-699d95d6f6-9xqqx,Uid:b07048e0-47ed-414d-b89a-27e90221643c,Namespace:calico-system,Attempt:0,}" Jan 24 00:36:03.103413 systemd-networkd[1401]: cali853f45b7891: Link UP Jan 24 00:36:03.104619 systemd-networkd[1401]: cali853f45b7891: Gained carrier Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:02.958 [INFO][3906] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:02.991 [INFO][3906] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0 whisker-699d95d6f6- calico-system b07048e0-47ed-414d-b89a-27e90221643c 904 0 2026-01-24 00:36:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:699d95d6f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-56b1d28098 whisker-699d95d6f6-9xqqx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali853f45b7891 [] [] }} ContainerID="5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" Namespace="calico-system" Pod="whisker-699d95d6f6-9xqqx" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:02.991 [INFO][3906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" Namespace="calico-system" Pod="whisker-699d95d6f6-9xqqx" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.037 [INFO][3918] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" HandleID="k8s-pod-network.5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.038 [INFO][3918] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" HandleID="k8s-pod-network.5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-56b1d28098", "pod":"whisker-699d95d6f6-9xqqx", "timestamp":"2026-01-24 00:36:03.037736356 +0000 UTC"}, Hostname:"ci-4081-3-6-n-56b1d28098", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.038 [INFO][3918] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.038 [INFO][3918] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.038 [INFO][3918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-56b1d28098' Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.049 [INFO][3918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.058 [INFO][3918] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.066 [INFO][3918] ipam/ipam.go 511: Trying affinity for 192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.068 [INFO][3918] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.071 [INFO][3918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.071 [INFO][3918] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.26.64/26 handle="k8s-pod-network.5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.073 [INFO][3918] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127 Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.079 [INFO][3918] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.26.64/26 handle="k8s-pod-network.5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.087 [INFO][3918] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.26.65/26] block=192.168.26.64/26 handle="k8s-pod-network.5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.087 [INFO][3918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.65/26] handle="k8s-pod-network.5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.087 [INFO][3918] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
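Every CNI record in this stream is double-wrapped: a journald timestamp, the containerd unit, then Calico's own timestamped record with a level, a bracketed id, and a source file and line. When triaging sequences like the lock/allocate/release run above, a throwaway parser is handy (the field names below are my own labels, not Calico's):

    import re

    # journal-ts containerd[pid]: cni-ts [LEVEL][n] path/file.go line: message
    CNI = re.compile(
        r"containerd\[\d+\]: "
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"\[(?P<level>[A-Z]+)\]\[(?P<n>\d+)\] "
        r"(?P<src>\S+) (?P<line>\d+): (?P<msg>.*)"
    )

    sample = ("Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.087 "
              "[INFO][3918] ipam/ipam.go 1262: Successfully claimed IPs: "
              "[192.168.26.65/26] block=192.168.26.64/26")
    m = CNI.search(sample)
    print(m.group("level"), m.group("src"), m.group("line"), "-", m.group("msg"))
    # INFO ipam/ipam.go 1262 - Successfully claimed IPs: [192.168.26.65/26] block=192.168.26.64/26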
Jan 24 00:36:03.137497 containerd[1515]: 2026-01-24 00:36:03.087 [INFO][3918] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.26.65/26] IPv6=[] ContainerID="5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" HandleID="k8s-pod-network.5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0" Jan 24 00:36:03.138523 containerd[1515]: 2026-01-24 00:36:03.089 [INFO][3906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" Namespace="calico-system" Pod="whisker-699d95d6f6-9xqqx" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0", GenerateName:"whisker-699d95d6f6-", Namespace:"calico-system", SelfLink:"", UID:"b07048e0-47ed-414d-b89a-27e90221643c", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"699d95d6f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"", Pod:"whisker-699d95d6f6-9xqqx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali853f45b7891", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:03.138523 containerd[1515]: 2026-01-24 00:36:03.089 [INFO][3906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.65/32] ContainerID="5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" Namespace="calico-system" Pod="whisker-699d95d6f6-9xqqx" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0" Jan 24 00:36:03.138523 containerd[1515]: 2026-01-24 00:36:03.090 [INFO][3906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali853f45b7891 ContainerID="5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" Namespace="calico-system" Pod="whisker-699d95d6f6-9xqqx" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0" Jan 24 00:36:03.138523 containerd[1515]: 2026-01-24 00:36:03.106 [INFO][3906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" Namespace="calico-system" Pod="whisker-699d95d6f6-9xqqx" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0" Jan 24 00:36:03.138523 containerd[1515]: 2026-01-24 00:36:03.106 [INFO][3906] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" 
Namespace="calico-system" Pod="whisker-699d95d6f6-9xqqx" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0", GenerateName:"whisker-699d95d6f6-", Namespace:"calico-system", SelfLink:"", UID:"b07048e0-47ed-414d-b89a-27e90221643c", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"699d95d6f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127", Pod:"whisker-699d95d6f6-9xqqx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali853f45b7891", MAC:"c2:da:5a:17:a1:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:03.138523 containerd[1515]: 2026-01-24 00:36:03.125 [INFO][3906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127" Namespace="calico-system" Pod="whisker-699d95d6f6-9xqqx" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-whisker--699d95d6f6--9xqqx-eth0" Jan 24 00:36:03.173693 containerd[1515]: time="2026-01-24T00:36:03.173142687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:36:03.173693 containerd[1515]: time="2026-01-24T00:36:03.173238484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:36:03.173693 containerd[1515]: time="2026-01-24T00:36:03.173282787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:03.175807 containerd[1515]: time="2026-01-24T00:36:03.175352832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:03.225180 systemd[1]: Started cri-containerd-5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127.scope - libcontainer container 5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127. 
Jan 24 00:36:03.315458 containerd[1515]: time="2026-01-24T00:36:03.315350091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-699d95d6f6-9xqqx,Uid:b07048e0-47ed-414d-b89a-27e90221643c,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b72e9bbf91784bdcc25dc9ee68a253f60b2f7a09c191e4f1af38525e1d28127\"" Jan 24 00:36:03.319762 containerd[1515]: time="2026-01-24T00:36:03.319415739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:36:03.472852 kubelet[2555]: I0124 00:36:03.472820 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:36:03.746976 containerd[1515]: time="2026-01-24T00:36:03.746740993Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:03.749238 containerd[1515]: time="2026-01-24T00:36:03.749144824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:36:03.749312 containerd[1515]: time="2026-01-24T00:36:03.749259117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:36:03.749532 kubelet[2555]: E0124 00:36:03.749486 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:36:03.749601 kubelet[2555]: E0124 00:36:03.749546 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:36:03.754186 kubelet[2555]: E0124 00:36:03.754093 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8b3c08fc22324961a4ad528b035b9863,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cpl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-699d95d6f6-9xqqx_calico-system(b07048e0-47ed-414d-b89a-27e90221643c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:03.757505 containerd[1515]: time="2026-01-24T00:36:03.757347180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:36:04.197563 containerd[1515]: time="2026-01-24T00:36:04.197464955Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:04.199614 containerd[1515]: time="2026-01-24T00:36:04.199528595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:36:04.199698 containerd[1515]: time="2026-01-24T00:36:04.199660431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:36:04.200080 kubelet[2555]: E0124 00:36:04.199994 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:36:04.200080 kubelet[2555]: E0124 00:36:04.200060 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:36:04.202410 kubelet[2555]: E0124 00:36:04.202291 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cpl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-699d95d6f6-9xqqx_calico-system(b07048e0-47ed-414d-b89a-27e90221643c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:04.204884 kubelet[2555]: E0124 00:36:04.204573 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:36:04.277034 kubelet[2555]: I0124 00:36:04.276933 2555 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170e1ff7-7c24-4c14-911e-0eb52bdbb523" path="/var/lib/kubelet/pods/170e1ff7-7c24-4c14-911e-0eb52bdbb523/volumes" Jan 24 
00:36:04.306171 systemd-networkd[1401]: cali853f45b7891: Gained IPv6LL Jan 24 00:36:04.480460 kubelet[2555]: E0124 00:36:04.480293 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:36:09.274973 containerd[1515]: time="2026-01-24T00:36:09.274193660Z" level=info msg="StopPodSandbox for \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\"" Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.341 [INFO][4179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.341 [INFO][4179] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" iface="eth0" netns="/var/run/netns/cni-abfbc21e-9dcc-1466-71a6-8ab2aa01cb7f" Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.344 [INFO][4179] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" iface="eth0" netns="/var/run/netns/cni-abfbc21e-9dcc-1466-71a6-8ab2aa01cb7f" Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.345 [INFO][4179] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" iface="eth0" netns="/var/run/netns/cni-abfbc21e-9dcc-1466-71a6-8ab2aa01cb7f" Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.345 [INFO][4179] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.345 [INFO][4179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.380 [INFO][4188] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" HandleID="k8s-pod-network.f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.381 [INFO][4188] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.381 [INFO][4188] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.390 [WARNING][4188] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" HandleID="k8s-pod-network.f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0"
Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.390 [INFO][4188] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" HandleID="k8s-pod-network.f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0"
Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.391 [INFO][4188] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 24 00:36:09.400194 containerd[1515]: 2026-01-24 00:36:09.395 [INFO][4179] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338"
Jan 24 00:36:09.401244 containerd[1515]: time="2026-01-24T00:36:09.401087426Z" level=info msg="TearDown network for sandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\" successfully"
Jan 24 00:36:09.401244 containerd[1515]: time="2026-01-24T00:36:09.401124575Z" level=info msg="StopPodSandbox for \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\" returns successfully"
Jan 24 00:36:09.405149 containerd[1515]: time="2026-01-24T00:36:09.405110836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6494d5bd79-znrpb,Uid:3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9,Namespace:calico-system,Attempt:1,}"
Jan 24 00:36:09.406296 systemd[1]: run-netns-cni\x2dabfbc21e\x2d9dcc\x2d1466\x2d71a6\x2d8ab2aa01cb7f.mount: Deactivated successfully.
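Before the teardown above, the whisker pod had already flipped from ErrImagePull to ImagePullBackOff: kubelet stops hammering the registry and retries on an exponential schedule, commonly 10 seconds doubling up to a 5-minute cap (assumed defaults here; the real timer is kubelet's flowcontrol backoff, not read from this cluster's configuration). A sketch of that schedule:

    def backoff_schedule(failures, base=10.0, cap=300.0):
        # Exponential image-pull backoff: start at `base` seconds, double on
        # each consecutive failure, never exceed `cap`. Assumed kubelet-like
        # defaults for illustration only.
        delay = base
        for _ in range(failures):
            yield delay
            delay = min(delay * 2, cap)

    print(list(backoff_schedule(6)))
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]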
Jan 24 00:36:09.576653 systemd-networkd[1401]: cali8cc7da45119: Link UP Jan 24 00:36:09.578307 systemd-networkd[1401]: cali8cc7da45119: Gained carrier Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.462 [INFO][4194] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.479 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0 calico-kube-controllers-6494d5bd79- calico-system 3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9 936 0 2026-01-24 00:35:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6494d5bd79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-56b1d28098 calico-kube-controllers-6494d5bd79-znrpb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8cc7da45119 [] [] }} ContainerID="2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" Namespace="calico-system" Pod="calico-kube-controllers-6494d5bd79-znrpb" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.479 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" Namespace="calico-system" Pod="calico-kube-controllers-6494d5bd79-znrpb" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.522 [INFO][4207] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" HandleID="k8s-pod-network.2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.522 [INFO][4207] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" HandleID="k8s-pod-network.2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-56b1d28098", "pod":"calico-kube-controllers-6494d5bd79-znrpb", "timestamp":"2026-01-24 00:36:09.522361805 +0000 UTC"}, Hostname:"ci-4081-3-6-n-56b1d28098", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.522 [INFO][4207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.522 [INFO][4207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.522 [INFO][4207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-56b1d28098' Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.532 [INFO][4207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.538 [INFO][4207] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.544 [INFO][4207] ipam/ipam.go 511: Trying affinity for 192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.546 [INFO][4207] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.552 [INFO][4207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.552 [INFO][4207] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.26.64/26 handle="k8s-pod-network.2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.554 [INFO][4207] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406 Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.558 [INFO][4207] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.26.64/26 handle="k8s-pod-network.2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.567 [INFO][4207] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.26.66/26] block=192.168.26.64/26 handle="k8s-pod-network.2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.567 [INFO][4207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.66/26] handle="k8s-pod-network.2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.567 [INFO][4207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:36:09.605851 containerd[1515]: 2026-01-24 00:36:09.567 [INFO][4207] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.26.66/26] IPv6=[] ContainerID="2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" HandleID="k8s-pod-network.2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:09.606933 containerd[1515]: 2026-01-24 00:36:09.572 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" Namespace="calico-system" Pod="calico-kube-controllers-6494d5bd79-znrpb" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0", GenerateName:"calico-kube-controllers-6494d5bd79-", Namespace:"calico-system", SelfLink:"", UID:"3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6494d5bd79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"", Pod:"calico-kube-controllers-6494d5bd79-znrpb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8cc7da45119", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:09.606933 containerd[1515]: 2026-01-24 00:36:09.572 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.66/32] ContainerID="2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" Namespace="calico-system" Pod="calico-kube-controllers-6494d5bd79-znrpb" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:09.606933 containerd[1515]: 2026-01-24 00:36:09.572 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cc7da45119 ContainerID="2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" Namespace="calico-system" Pod="calico-kube-controllers-6494d5bd79-znrpb" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:09.606933 containerd[1515]: 2026-01-24 00:36:09.578 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" Namespace="calico-system" Pod="calico-kube-controllers-6494d5bd79-znrpb" 
WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:09.606933 containerd[1515]: 2026-01-24 00:36:09.581 [INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" Namespace="calico-system" Pod="calico-kube-controllers-6494d5bd79-znrpb" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0", GenerateName:"calico-kube-controllers-6494d5bd79-", Namespace:"calico-system", SelfLink:"", UID:"3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6494d5bd79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406", Pod:"calico-kube-controllers-6494d5bd79-znrpb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8cc7da45119", MAC:"92:61:45:46:39:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:09.606933 containerd[1515]: 2026-01-24 00:36:09.596 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406" Namespace="calico-system" Pod="calico-kube-controllers-6494d5bd79-znrpb" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:09.645136 containerd[1515]: time="2026-01-24T00:36:09.643517386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:36:09.645136 containerd[1515]: time="2026-01-24T00:36:09.643590023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:36:09.645136 containerd[1515]: time="2026-01-24T00:36:09.643635593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:09.645136 containerd[1515]: time="2026-01-24T00:36:09.643789470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:09.690128 systemd[1]: Started cri-containerd-2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406.scope - libcontainer container 2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406. Jan 24 00:36:09.761401 containerd[1515]: time="2026-01-24T00:36:09.761325194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6494d5bd79-znrpb,Uid:3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9,Namespace:calico-system,Attempt:1,} returns sandbox id \"2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406\"" Jan 24 00:36:09.769625 containerd[1515]: time="2026-01-24T00:36:09.769246914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:36:09.839544 kubelet[2555]: I0124 00:36:09.838310 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:36:10.218915 containerd[1515]: time="2026-01-24T00:36:10.218793926Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:10.220879 containerd[1515]: time="2026-01-24T00:36:10.220632617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:36:10.220879 containerd[1515]: time="2026-01-24T00:36:10.220699722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:36:10.221018 kubelet[2555]: E0124 00:36:10.220847 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:36:10.221018 kubelet[2555]: E0124 00:36:10.220890 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:36:10.221427 kubelet[2555]: E0124 00:36:10.221385 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vctxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6494d5bd79-znrpb_calico-system(3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:10.223076 kubelet[2555]: E0124 00:36:10.222985 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:36:10.274049 containerd[1515]: time="2026-01-24T00:36:10.274009267Z" level=info msg="StopPodSandbox 
for \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\"" Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.304 [INFO][4337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.305 [INFO][4337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" iface="eth0" netns="/var/run/netns/cni-39cc8336-53f7-ecb5-c363-93fc927e00a2" Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.306 [INFO][4337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" iface="eth0" netns="/var/run/netns/cni-39cc8336-53f7-ecb5-c363-93fc927e00a2" Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.306 [INFO][4337] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" iface="eth0" netns="/var/run/netns/cni-39cc8336-53f7-ecb5-c363-93fc927e00a2" Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.306 [INFO][4337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.306 [INFO][4337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.323 [INFO][4344] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" HandleID="k8s-pod-network.0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.324 [INFO][4344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.324 [INFO][4344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.328 [WARNING][4344] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" HandleID="k8s-pod-network.0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.328 [INFO][4344] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" HandleID="k8s-pod-network.0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.329 [INFO][4344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:10.333074 containerd[1515]: 2026-01-24 00:36:10.331 [INFO][4337] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:10.333828 containerd[1515]: time="2026-01-24T00:36:10.333777435Z" level=info msg="TearDown network for sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\" successfully" Jan 24 00:36:10.333828 containerd[1515]: time="2026-01-24T00:36:10.333803281Z" level=info msg="StopPodSandbox for \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\" returns successfully" Jan 24 00:36:10.334450 containerd[1515]: time="2026-01-24T00:36:10.334425620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nznf8,Uid:445371de-a1b4-4071-8903-a5dc58d21e9e,Namespace:kube-system,Attempt:1,}" Jan 24 00:36:10.404656 systemd[1]: run-netns-cni\x2d39cc8336\x2d53f7\x2decb5\x2dc363\x2d93fc927e00a2.mount: Deactivated successfully. Jan 24 00:36:10.429590 systemd-networkd[1401]: calif1c891a3ac4: Link UP Jan 24 00:36:10.430131 systemd-networkd[1401]: calif1c891a3ac4: Gained carrier Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.362 [INFO][4350] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.371 [INFO][4350] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0 coredns-674b8bbfcf- kube-system 445371de-a1b4-4071-8903-a5dc58d21e9e 948 0 2026-01-24 00:35:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-56b1d28098 coredns-674b8bbfcf-nznf8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif1c891a3ac4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-nznf8" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.371 [INFO][4350] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-nznf8" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.390 [INFO][4364] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" HandleID="k8s-pod-network.42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.390 [INFO][4364] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" HandleID="k8s-pod-network.42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f060), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-56b1d28098", "pod":"coredns-674b8bbfcf-nznf8", "timestamp":"2026-01-24 00:36:10.390420416 +0000 UTC"}, Hostname:"ci-4081-3-6-n-56b1d28098", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.390 [INFO][4364] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.390 [INFO][4364] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.390 [INFO][4364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-56b1d28098' Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.396 [INFO][4364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.400 [INFO][4364] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.406 [INFO][4364] ipam/ipam.go 511: Trying affinity for 192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.408 [INFO][4364] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.410 [INFO][4364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.410 [INFO][4364] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.26.64/26 handle="k8s-pod-network.42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.414 [INFO][4364] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.418 [INFO][4364] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.26.64/26 handle="k8s-pod-network.42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.421 [INFO][4364] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.26.67/26] block=192.168.26.64/26 handle="k8s-pod-network.42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.421 [INFO][4364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.67/26] handle="k8s-pod-network.42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.421 [INFO][4364] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:36:10.442454 containerd[1515]: 2026-01-24 00:36:10.421 [INFO][4364] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.26.67/26] IPv6=[] ContainerID="42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" HandleID="k8s-pod-network.42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:10.442887 containerd[1515]: 2026-01-24 00:36:10.424 [INFO][4350] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-nznf8" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"445371de-a1b4-4071-8903-a5dc58d21e9e", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"", Pod:"coredns-674b8bbfcf-nznf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1c891a3ac4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:10.442887 containerd[1515]: 2026-01-24 00:36:10.426 [INFO][4350] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.67/32] ContainerID="42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-nznf8" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:10.442887 containerd[1515]: 2026-01-24 00:36:10.426 [INFO][4350] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1c891a3ac4 ContainerID="42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-nznf8" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:10.442887 containerd[1515]: 2026-01-24 00:36:10.429 [INFO][4350] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-nznf8" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:10.442887 containerd[1515]: 2026-01-24 00:36:10.430 [INFO][4350] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-nznf8" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"445371de-a1b4-4071-8903-a5dc58d21e9e", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c", Pod:"coredns-674b8bbfcf-nznf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1c891a3ac4", MAC:"aa:0e:e0:ea:6e:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:10.442887 containerd[1515]: 2026-01-24 00:36:10.439 [INFO][4350] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-nznf8" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:10.458642 containerd[1515]: time="2026-01-24T00:36:10.458460163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:36:10.458642 containerd[1515]: time="2026-01-24T00:36:10.458520297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:36:10.458642 containerd[1515]: time="2026-01-24T00:36:10.458528578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:10.458642 containerd[1515]: time="2026-01-24T00:36:10.458601805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:10.481059 systemd[1]: Started cri-containerd-42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c.scope - libcontainer container 42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c. Jan 24 00:36:10.492635 kubelet[2555]: E0124 00:36:10.492388 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:36:10.522367 containerd[1515]: time="2026-01-24T00:36:10.522090615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nznf8,Uid:445371de-a1b4-4071-8903-a5dc58d21e9e,Namespace:kube-system,Attempt:1,} returns sandbox id \"42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c\"" Jan 24 00:36:10.526490 containerd[1515]: time="2026-01-24T00:36:10.526459361Z" level=info msg="CreateContainer within sandbox \"42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:36:10.540184 containerd[1515]: time="2026-01-24T00:36:10.540152242Z" level=info msg="CreateContainer within sandbox \"42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d63b06745d55c0c3c65c35e5ece2790d6fef226d8efa750cb6db03b65d215f16\"" Jan 24 00:36:10.541303 containerd[1515]: time="2026-01-24T00:36:10.541262560Z" level=info msg="StartContainer for \"d63b06745d55c0c3c65c35e5ece2790d6fef226d8efa750cb6db03b65d215f16\"" Jan 24 00:36:10.566059 systemd[1]: Started cri-containerd-d63b06745d55c0c3c65c35e5ece2790d6fef226d8efa750cb6db03b65d215f16.scope - libcontainer container d63b06745d55c0c3c65c35e5ece2790d6fef226d8efa750cb6db03b65d215f16. Jan 24 00:36:10.586181 containerd[1515]: time="2026-01-24T00:36:10.586134670Z" level=info msg="StartContainer for \"d63b06745d55c0c3c65c35e5ece2790d6fef226d8efa750cb6db03b65d215f16\" returns successfully" Jan 24 00:36:11.218327 systemd-networkd[1401]: cali8cc7da45119: Gained IPv6LL Jan 24 00:36:11.274219 containerd[1515]: time="2026-01-24T00:36:11.273682198Z" level=info msg="StopPodSandbox for \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\"" Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.335 [INFO][4479] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.336 [INFO][4479] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" iface="eth0" netns="/var/run/netns/cni-7b690a80-12ba-2cdd-856b-86f132222f0e" Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.337 [INFO][4479] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" iface="eth0" netns="/var/run/netns/cni-7b690a80-12ba-2cdd-856b-86f132222f0e" Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.337 [INFO][4479] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" iface="eth0" netns="/var/run/netns/cni-7b690a80-12ba-2cdd-856b-86f132222f0e" Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.337 [INFO][4479] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.337 [INFO][4479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.356 [INFO][4491] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" HandleID="k8s-pod-network.abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.356 [INFO][4491] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.356 [INFO][4491] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.361 [WARNING][4491] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" HandleID="k8s-pod-network.abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.361 [INFO][4491] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" HandleID="k8s-pod-network.abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.362 [INFO][4491] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:11.366703 containerd[1515]: 2026-01-24 00:36:11.364 [INFO][4479] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:11.367591 containerd[1515]: time="2026-01-24T00:36:11.367000724Z" level=info msg="TearDown network for sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\" successfully" Jan 24 00:36:11.367591 containerd[1515]: time="2026-01-24T00:36:11.367021829Z" level=info msg="StopPodSandbox for \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\" returns successfully" Jan 24 00:36:11.368302 containerd[1515]: time="2026-01-24T00:36:11.368253085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c764d8b9-zh62f,Uid:88376c0e-7993-4786-9815-0474220bc333,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:36:11.405835 systemd[1]: run-netns-cni\x2d7b690a80\x2d12ba\x2d2cdd\x2d856b\x2d86f132222f0e.mount: Deactivated successfully. 
Jan 24 00:36:11.473861 systemd-networkd[1401]: calie6a3f8a21d2: Link UP Jan 24 00:36:11.474359 systemd-networkd[1401]: calie6a3f8a21d2: Gained carrier Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.409 [INFO][4502] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.419 [INFO][4502] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0 calico-apiserver-79c764d8b9- calico-apiserver 88376c0e-7993-4786-9815-0474220bc333 960 0 2026-01-24 00:35:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79c764d8b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-56b1d28098 calico-apiserver-79c764d8b9-zh62f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie6a3f8a21d2 [] [] }} ContainerID="8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-zh62f" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.419 [INFO][4502] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-zh62f" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.442 [INFO][4514] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" HandleID="k8s-pod-network.8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.443 [INFO][4514] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" HandleID="k8s-pod-network.8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-56b1d28098", "pod":"calico-apiserver-79c764d8b9-zh62f", "timestamp":"2026-01-24 00:36:11.442966222 +0000 UTC"}, Hostname:"ci-4081-3-6-n-56b1d28098", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.443 [INFO][4514] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.443 [INFO][4514] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.443 [INFO][4514] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-56b1d28098' Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.447 [INFO][4514] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.451 [INFO][4514] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.454 [INFO][4514] ipam/ipam.go 511: Trying affinity for 192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.456 [INFO][4514] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.457 [INFO][4514] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.457 [INFO][4514] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.26.64/26 handle="k8s-pod-network.8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.458 [INFO][4514] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.462 [INFO][4514] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.26.64/26 handle="k8s-pod-network.8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.467 [INFO][4514] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.26.68/26] block=192.168.26.64/26 handle="k8s-pod-network.8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.467 [INFO][4514] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.68/26] handle="k8s-pod-network.8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.467 [INFO][4514] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:36:11.486386 containerd[1515]: 2026-01-24 00:36:11.467 [INFO][4514] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.26.68/26] IPv6=[] ContainerID="8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" HandleID="k8s-pod-network.8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:11.486815 containerd[1515]: 2026-01-24 00:36:11.470 [INFO][4502] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-zh62f" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0", GenerateName:"calico-apiserver-79c764d8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"88376c0e-7993-4786-9815-0474220bc333", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c764d8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"", Pod:"calico-apiserver-79c764d8b9-zh62f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a3f8a21d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:11.486815 containerd[1515]: 2026-01-24 00:36:11.470 [INFO][4502] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.68/32] ContainerID="8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-zh62f" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:11.486815 containerd[1515]: 2026-01-24 00:36:11.470 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6a3f8a21d2 ContainerID="8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-zh62f" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:11.486815 containerd[1515]: 2026-01-24 00:36:11.473 [INFO][4502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-zh62f" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:11.486815 containerd[1515]: 2026-01-24 
00:36:11.474 [INFO][4502] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-zh62f" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0", GenerateName:"calico-apiserver-79c764d8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"88376c0e-7993-4786-9815-0474220bc333", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c764d8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e", Pod:"calico-apiserver-79c764d8b9-zh62f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a3f8a21d2", MAC:"ca:5b:71:5d:a1:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:11.486815 containerd[1515]: 2026-01-24 00:36:11.483 [INFO][4502] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-zh62f" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:11.496053 kubelet[2555]: E0124 00:36:11.495482 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:36:11.516894 containerd[1515]: time="2026-01-24T00:36:11.516786736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:36:11.516894 containerd[1515]: time="2026-01-24T00:36:11.516838997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:36:11.516894 containerd[1515]: time="2026-01-24T00:36:11.516849560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:11.518246 containerd[1515]: time="2026-01-24T00:36:11.518075375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:11.525544 kubelet[2555]: I0124 00:36:11.525497 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nznf8" podStartSLOduration=39.525481349 podStartE2EDuration="39.525481349s" podCreationTimestamp="2026-01-24 00:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:36:11.508561356 +0000 UTC m=+45.434712272" watchObservedRunningTime="2026-01-24 00:36:11.525481349 +0000 UTC m=+45.451632265" Jan 24 00:36:11.544832 systemd[1]: Started cri-containerd-8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e.scope - libcontainer container 8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e. Jan 24 00:36:11.587854 containerd[1515]: time="2026-01-24T00:36:11.587750341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c764d8b9-zh62f,Uid:88376c0e-7993-4786-9815-0474220bc333,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e\"" Jan 24 00:36:11.590336 containerd[1515]: time="2026-01-24T00:36:11.590177788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:36:11.900546 kubelet[2555]: I0124 00:36:11.899772 2555 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:36:12.035619 containerd[1515]: time="2026-01-24T00:36:12.035560821Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:12.038037 containerd[1515]: time="2026-01-24T00:36:12.037832717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:36:12.038037 containerd[1515]: time="2026-01-24T00:36:12.037969866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:36:12.038270 kubelet[2555]: E0124 00:36:12.038215 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:12.038351 kubelet[2555]: E0124 00:36:12.038287 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:12.038607 kubelet[2555]: E0124 00:36:12.038531 2555 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmf64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79c764d8b9-zh62f_calico-apiserver(88376c0e-7993-4786-9815-0474220bc333): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:12.039772 kubelet[2555]: E0124 00:36:12.039698 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:36:12.242336 systemd-networkd[1401]: calif1c891a3ac4: Gained IPv6LL Jan 24 00:36:12.278554 containerd[1515]: time="2026-01-24T00:36:12.275816450Z" level=info msg="StopPodSandbox for \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\"" Jan 24 00:36:12.278554 containerd[1515]: time="2026-01-24T00:36:12.276472167Z" level=info msg="StopPodSandbox for \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\"" Jan 24 00:36:12.279664 containerd[1515]: time="2026-01-24T00:36:12.279560646Z" level=info msg="StopPodSandbox for 
\"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\"" Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.367 [INFO][4596] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.368 [INFO][4596] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" iface="eth0" netns="/var/run/netns/cni-798eb8de-8e6b-1399-d736-8f8aa5afd0ed" Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.371 [INFO][4596] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" iface="eth0" netns="/var/run/netns/cni-798eb8de-8e6b-1399-d736-8f8aa5afd0ed" Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.372 [INFO][4596] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" iface="eth0" netns="/var/run/netns/cni-798eb8de-8e6b-1399-d736-8f8aa5afd0ed" Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.372 [INFO][4596] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.372 [INFO][4596] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.429 [INFO][4621] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" HandleID="k8s-pod-network.9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.429 [INFO][4621] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.429 [INFO][4621] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.435 [WARNING][4621] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" HandleID="k8s-pod-network.9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.435 [INFO][4621] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" HandleID="k8s-pod-network.9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.439 [INFO][4621] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:12.444473 containerd[1515]: 2026-01-24 00:36:12.440 [INFO][4596] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:12.449915 systemd[1]: run-netns-cni\x2d798eb8de\x2d8e6b\x2d1399\x2dd736\x2d8f8aa5afd0ed.mount: Deactivated successfully. 
Jan 24 00:36:12.450975 containerd[1515]: time="2026-01-24T00:36:12.450354190Z" level=info msg="TearDown network for sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\" successfully" Jan 24 00:36:12.450975 containerd[1515]: time="2026-01-24T00:36:12.450390068Z" level=info msg="StopPodSandbox for \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\" returns successfully" Jan 24 00:36:12.453184 containerd[1515]: time="2026-01-24T00:36:12.453168710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v8smt,Uid:639522bb-4ded-4c6d-8204-2dc920251ed9,Namespace:calico-system,Attempt:1,}" Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.404 [INFO][4597] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.405 [INFO][4597] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" iface="eth0" netns="/var/run/netns/cni-ef623cc2-61c7-f9a5-01be-dad5abfaec71" Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.406 [INFO][4597] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" iface="eth0" netns="/var/run/netns/cni-ef623cc2-61c7-f9a5-01be-dad5abfaec71" Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.407 [INFO][4597] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" iface="eth0" netns="/var/run/netns/cni-ef623cc2-61c7-f9a5-01be-dad5abfaec71" Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.407 [INFO][4597] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.407 [INFO][4597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.448 [INFO][4630] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" HandleID="k8s-pod-network.5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.452 [INFO][4630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.452 [INFO][4630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.458 [WARNING][4630] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" HandleID="k8s-pod-network.5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.458 [INFO][4630] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" HandleID="k8s-pod-network.5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.460 [INFO][4630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:12.468285 containerd[1515]: 2026-01-24 00:36:12.464 [INFO][4597] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:12.468285 containerd[1515]: time="2026-01-24T00:36:12.468167170Z" level=info msg="TearDown network for sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\" successfully" Jan 24 00:36:12.468285 containerd[1515]: time="2026-01-24T00:36:12.468194445Z" level=info msg="StopPodSandbox for \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\" returns successfully" Jan 24 00:36:12.472387 systemd[1]: run-netns-cni\x2def623cc2\x2d61c7\x2df9a5\x2d01be\x2ddad5abfaec71.mount: Deactivated successfully. Jan 24 00:36:12.473341 containerd[1515]: time="2026-01-24T00:36:12.472388206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njp75,Uid:641bc171-0396-4a65-b184-ec8db27324ea,Namespace:calico-system,Attempt:1,}" Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.420 [INFO][4607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.420 [INFO][4607] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" iface="eth0" netns="/var/run/netns/cni-22cc67da-dbf6-2ebe-e70e-b9151db76544" Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.420 [INFO][4607] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" iface="eth0" netns="/var/run/netns/cni-22cc67da-dbf6-2ebe-e70e-b9151db76544" Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.421 [INFO][4607] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" iface="eth0" netns="/var/run/netns/cni-22cc67da-dbf6-2ebe-e70e-b9151db76544" Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.421 [INFO][4607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.421 [INFO][4607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.474 [INFO][4635] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" HandleID="k8s-pod-network.a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.474 [INFO][4635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.474 [INFO][4635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.489 [WARNING][4635] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" HandleID="k8s-pod-network.a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.489 [INFO][4635] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" HandleID="k8s-pod-network.a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.491 [INFO][4635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:12.500234 containerd[1515]: 2026-01-24 00:36:12.493 [INFO][4607] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:12.502557 containerd[1515]: time="2026-01-24T00:36:12.500722827Z" level=info msg="TearDown network for sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\" successfully" Jan 24 00:36:12.502557 containerd[1515]: time="2026-01-24T00:36:12.500743851Z" level=info msg="StopPodSandbox for \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\" returns successfully" Jan 24 00:36:12.504133 containerd[1515]: time="2026-01-24T00:36:12.504081181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wtrng,Uid:f50723bb-0fb2-4f3f-b014-3c7c00d05077,Namespace:kube-system,Attempt:1,}" Jan 24 00:36:12.505100 systemd[1]: run-netns-cni\x2d22cc67da\x2ddbf6\x2d2ebe\x2de70e\x2db9151db76544.mount: Deactivated successfully. 
Jan 24 00:36:12.527203 kubelet[2555]: E0124 00:36:12.527057 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:36:12.670060 systemd-networkd[1401]: calif61e5c8d996: Link UP Jan 24 00:36:12.670248 systemd-networkd[1401]: calif61e5c8d996: Gained carrier Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.533 [INFO][4657] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.560 [INFO][4657] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0 goldmane-666569f655- calico-system 639522bb-4ded-4c6d-8204-2dc920251ed9 989 0 2026-01-24 00:35:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-56b1d28098 goldmane-666569f655-v8smt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif61e5c8d996 [] [] }} ContainerID="0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" Namespace="calico-system" Pod="goldmane-666569f655-v8smt" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.560 [INFO][4657] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" Namespace="calico-system" Pod="goldmane-666569f655-v8smt" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.620 [INFO][4694] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" HandleID="k8s-pod-network.0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.620 [INFO][4694] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" HandleID="k8s-pod-network.0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-56b1d28098", "pod":"goldmane-666569f655-v8smt", "timestamp":"2026-01-24 00:36:12.620745569 +0000 UTC"}, Hostname:"ci-4081-3-6-n-56b1d28098", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 
00:36:12.620 [INFO][4694] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.620 [INFO][4694] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.620 [INFO][4694] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-56b1d28098' Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.629 [INFO][4694] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.636 [INFO][4694] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.640 [INFO][4694] ipam/ipam.go 511: Trying affinity for 192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.642 [INFO][4694] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.646 [INFO][4694] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.646 [INFO][4694] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.26.64/26 handle="k8s-pod-network.0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.650 [INFO][4694] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2 Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.654 [INFO][4694] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.26.64/26 handle="k8s-pod-network.0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.661 [INFO][4694] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.26.69/26] block=192.168.26.64/26 handle="k8s-pod-network.0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.661 [INFO][4694] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.69/26] handle="k8s-pod-network.0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.661 [INFO][4694] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:36:12.681721 containerd[1515]: 2026-01-24 00:36:12.661 [INFO][4694] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.26.69/26] IPv6=[] ContainerID="0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" HandleID="k8s-pod-network.0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:12.682677 containerd[1515]: 2026-01-24 00:36:12.664 [INFO][4657] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" Namespace="calico-system" Pod="goldmane-666569f655-v8smt" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"639522bb-4ded-4c6d-8204-2dc920251ed9", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"", Pod:"goldmane-666569f655-v8smt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif61e5c8d996", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:12.682677 containerd[1515]: 2026-01-24 00:36:12.664 [INFO][4657] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.69/32] ContainerID="0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" Namespace="calico-system" Pod="goldmane-666569f655-v8smt" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:12.682677 containerd[1515]: 2026-01-24 00:36:12.664 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif61e5c8d996 ContainerID="0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" Namespace="calico-system" Pod="goldmane-666569f655-v8smt" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:12.682677 containerd[1515]: 2026-01-24 00:36:12.666 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" Namespace="calico-system" Pod="goldmane-666569f655-v8smt" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:12.682677 containerd[1515]: 2026-01-24 00:36:12.666 [INFO][4657] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" 
Namespace="calico-system" Pod="goldmane-666569f655-v8smt" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"639522bb-4ded-4c6d-8204-2dc920251ed9", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2", Pod:"goldmane-666569f655-v8smt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif61e5c8d996", MAC:"86:43:2a:61:ab:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:12.682677 containerd[1515]: 2026-01-24 00:36:12.679 [INFO][4657] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2" Namespace="calico-system" Pod="goldmane-666569f655-v8smt" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:12.708726 containerd[1515]: time="2026-01-24T00:36:12.708411578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:36:12.708726 containerd[1515]: time="2026-01-24T00:36:12.708458237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:36:12.708726 containerd[1515]: time="2026-01-24T00:36:12.708468049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:12.708726 containerd[1515]: time="2026-01-24T00:36:12.708544685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:12.737071 systemd[1]: Started cri-containerd-0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2.scope - libcontainer container 0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2. 
Jan 24 00:36:12.775427 systemd-networkd[1401]: calic74878a7719: Link UP Jan 24 00:36:12.775568 systemd-networkd[1401]: calic74878a7719: Gained carrier Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.571 [INFO][4666] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.587 [INFO][4666] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0 csi-node-driver- calico-system 641bc171-0396-4a65-b184-ec8db27324ea 990 0 2026-01-24 00:35:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-56b1d28098 csi-node-driver-njp75 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic74878a7719 [] [] }} ContainerID="898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" Namespace="calico-system" Pod="csi-node-driver-njp75" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.588 [INFO][4666] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" Namespace="calico-system" Pod="csi-node-driver-njp75" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.628 [INFO][4707] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" HandleID="k8s-pod-network.898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.635 [INFO][4707] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" HandleID="k8s-pod-network.898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-56b1d28098", "pod":"csi-node-driver-njp75", "timestamp":"2026-01-24 00:36:12.628831067 +0000 UTC"}, Hostname:"ci-4081-3-6-n-56b1d28098", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.635 [INFO][4707] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.661 [INFO][4707] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
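The "[INFO][4707] ... About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock." pair above is how Calico serializes concurrent CNI ADDs on one node: each request claims its address and writes the block back while holding a single per-host lock, so two pods being set up at once (here [4707] and, shortly after, [4704]) cannot claim the same IP. A minimal Go sketch of that pattern; the allocator type and addresses are illustrative, not Calico's implementation:

package main

import (
	"fmt"
	"sync"
)

// allocator stands in for the node's affine block plus the
// "host-wide IPAM lock" the log lines refer to.
type allocator struct {
	mu     sync.Mutex // the host-wide lock
	offset int        // next free offset inside 192.168.26.64/26
}

func (a *allocator) assign(pod string) {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	fmt.Printf("%s -> 192.168.26.%d/26\n", pod, 64+a.offset)
	a.offset++ // "Writing block in order to claim IPs"
}

func main() {
	a := &allocator{offset: 6} // .69 went to goldmane above; .70 is next free
	var wg sync.WaitGroup
	for _, pod := range []string{"csi-node-driver-njp75", "coredns-674b8bbfcf-wtrng"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); a.assign(p) }(pod)
	}
	wg.Wait() // which pod gets .70 vs .71 depends on lock order, as in the log
}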
Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.661 [INFO][4707] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-56b1d28098' Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.728 [INFO][4707] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.738 [INFO][4707] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.743 [INFO][4707] ipam/ipam.go 511: Trying affinity for 192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.745 [INFO][4707] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.750 [INFO][4707] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.751 [INFO][4707] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.26.64/26 handle="k8s-pod-network.898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.753 [INFO][4707] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994 Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.758 [INFO][4707] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.26.64/26 handle="k8s-pod-network.898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.765 [INFO][4707] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.26.70/26] block=192.168.26.64/26 handle="k8s-pod-network.898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.765 [INFO][4707] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.70/26] handle="k8s-pod-network.898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.765 [INFO][4707] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
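The walk just logged (look up affinities for the host, try the affinity for 192.168.26.64/26, load the block, claim the first free address, write the block back) is what produces the sequential addresses in this section: goldmane received .69, csi-node-driver .70, and coredns .71 from the same /26 block affine to this node. A small self-contained sketch of "claim the first free /32 in the block" using Go's net/netip; the used set below is hand-written to match the log, not read from a datastore:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node's affine IPAM block; Calico hands out /32s from it.
	block := netip.MustParsePrefix("192.168.26.64/26")

	// Addresses already claimed per this log: .64-.68 by earlier pods,
	// .69 by goldmane above (illustrative, not authoritative).
	used := map[netip.Addr]bool{}
	for a, n := block.Addr(), 0; n < 6; a, n = a.Next(), n+1 {
		used[a] = true
	}

	// "Attempting to assign 1 addresses from block": first free wins.
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			fmt.Println("claimed:", a) // 192.168.26.70, as in the log
			break
		}
	}
}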
Jan 24 00:36:12.787476 containerd[1515]: 2026-01-24 00:36:12.765 [INFO][4707] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.26.70/26] IPv6=[] ContainerID="898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" HandleID="k8s-pod-network.898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:12.787975 containerd[1515]: 2026-01-24 00:36:12.769 [INFO][4666] cni-plugin/k8s.go 418: Populated endpoint ContainerID="898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" Namespace="calico-system" Pod="csi-node-driver-njp75" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"641bc171-0396-4a65-b184-ec8db27324ea", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"", Pod:"csi-node-driver-njp75", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic74878a7719", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:12.787975 containerd[1515]: 2026-01-24 00:36:12.769 [INFO][4666] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.70/32] ContainerID="898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" Namespace="calico-system" Pod="csi-node-driver-njp75" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:12.787975 containerd[1515]: 2026-01-24 00:36:12.769 [INFO][4666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic74878a7719 ContainerID="898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" Namespace="calico-system" Pod="csi-node-driver-njp75" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:12.787975 containerd[1515]: 2026-01-24 00:36:12.777 [INFO][4666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" Namespace="calico-system" Pod="csi-node-driver-njp75" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:12.787975 containerd[1515]: 2026-01-24 00:36:12.777 [INFO][4666] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" Namespace="calico-system" Pod="csi-node-driver-njp75" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"641bc171-0396-4a65-b184-ec8db27324ea", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994", Pod:"csi-node-driver-njp75", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic74878a7719", MAC:"ee:8a:e4:64:35:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:12.787975 containerd[1515]: 2026-01-24 00:36:12.785 [INFO][4666] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994" Namespace="calico-system" Pod="csi-node-driver-njp75" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:12.804843 containerd[1515]: time="2026-01-24T00:36:12.804418707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v8smt,Uid:639522bb-4ded-4c6d-8204-2dc920251ed9,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2\"" Jan 24 00:36:12.808446 containerd[1515]: time="2026-01-24T00:36:12.808383399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:36:12.814826 containerd[1515]: time="2026-01-24T00:36:12.814455954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:36:12.814826 containerd[1515]: time="2026-01-24T00:36:12.814525799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:36:12.814826 containerd[1515]: time="2026-01-24T00:36:12.814535971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:12.814826 containerd[1515]: time="2026-01-24T00:36:12.814742885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:12.835039 systemd[1]: Started cri-containerd-898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994.scope - libcontainer container 898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994. Jan 24 00:36:12.882089 systemd-networkd[1401]: cali99b7d76b661: Link UP Jan 24 00:36:12.882891 systemd-networkd[1401]: cali99b7d76b661: Gained carrier Jan 24 00:36:12.897612 containerd[1515]: time="2026-01-24T00:36:12.897159711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njp75,Uid:641bc171-0396-4a65-b184-ec8db27324ea,Namespace:calico-system,Attempt:1,} returns sandbox id \"898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994\"" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.568 [INFO][4681] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.581 [INFO][4681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0 coredns-674b8bbfcf- kube-system f50723bb-0fb2-4f3f-b014-3c7c00d05077 991 0 2026-01-24 00:35:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-56b1d28098 coredns-674b8bbfcf-wtrng eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali99b7d76b661 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" Namespace="kube-system" Pod="coredns-674b8bbfcf-wtrng" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.581 [INFO][4681] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" Namespace="kube-system" Pod="coredns-674b8bbfcf-wtrng" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.645 [INFO][4704] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" HandleID="k8s-pod-network.a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.646 [INFO][4704] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" HandleID="k8s-pod-network.a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-56b1d28098", "pod":"coredns-674b8bbfcf-wtrng", "timestamp":"2026-01-24 00:36:12.645912504 +0000 UTC"}, Hostname:"ci-4081-3-6-n-56b1d28098", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.646 [INFO][4704] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.765 [INFO][4704] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.765 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-56b1d28098' Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.828 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.838 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.844 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.850 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.853 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.854 [INFO][4704] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.26.64/26 handle="k8s-pod-network.a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.855 [INFO][4704] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.860 [INFO][4704] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.26.64/26 handle="k8s-pod-network.a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.869 [INFO][4704] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.26.71/26] block=192.168.26.64/26 handle="k8s-pod-network.a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.870 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.71/26] handle="k8s-pod-network.a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.870 [INFO][4704] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
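Two decoding notes for the coredns endpoint dumps around here. The endpoint found at plugin.go 340 above lists the pod's ports in decimal ({dns UDP 53}, {dns-tcp TCP 53}, {metrics TCP 9153}); the v3.WorkloadEndpoint dumps that follow print the same ports in hex, so Port:0x35 is simply 53 and Port:0x23c1 is 9153. Likewise the pod's IPNetworks entry is a single /32 that still falls inside the node's /26 block. A tiny Go check of both:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Hex port values from the struct dumps, decoded.
	fmt.Println(0x35, 0x23c1) // 53 9153: dns/dns-tcp and metrics

	// The per-pod /32 sits inside the node's affine /26 block.
	block := netip.MustParsePrefix("192.168.26.64/26")
	pod := netip.MustParsePrefix("192.168.26.71/32")
	fmt.Println(block.Contains(pod.Addr())) // true
}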
Jan 24 00:36:12.902953 containerd[1515]: 2026-01-24 00:36:12.870 [INFO][4704] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.26.71/26] IPv6=[] ContainerID="a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" HandleID="k8s-pod-network.a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:12.904089 containerd[1515]: 2026-01-24 00:36:12.877 [INFO][4681] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" Namespace="kube-system" Pod="coredns-674b8bbfcf-wtrng" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f50723bb-0fb2-4f3f-b014-3c7c00d05077", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"", Pod:"coredns-674b8bbfcf-wtrng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99b7d76b661", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:12.904089 containerd[1515]: 2026-01-24 00:36:12.877 [INFO][4681] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.71/32] ContainerID="a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" Namespace="kube-system" Pod="coredns-674b8bbfcf-wtrng" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:12.904089 containerd[1515]: 2026-01-24 00:36:12.877 [INFO][4681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99b7d76b661 ContainerID="a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" Namespace="kube-system" Pod="coredns-674b8bbfcf-wtrng" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:12.904089 containerd[1515]: 2026-01-24 00:36:12.885 [INFO][4681] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-wtrng" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:12.904089 containerd[1515]: 2026-01-24 00:36:12.887 [INFO][4681] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" Namespace="kube-system" Pod="coredns-674b8bbfcf-wtrng" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f50723bb-0fb2-4f3f-b014-3c7c00d05077", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f", Pod:"coredns-674b8bbfcf-wtrng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99b7d76b661", MAC:"ee:d7:a1:f6:9d:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:12.904089 containerd[1515]: 2026-01-24 00:36:12.899 [INFO][4681] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f" Namespace="kube-system" Pod="coredns-674b8bbfcf-wtrng" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:12.931760 containerd[1515]: time="2026-01-24T00:36:12.931658015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:36:12.933881 containerd[1515]: time="2026-01-24T00:36:12.933449592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:36:12.933881 containerd[1515]: time="2026-01-24T00:36:12.933466995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:12.933881 containerd[1515]: time="2026-01-24T00:36:12.933534559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:12.955049 systemd[1]: Started cri-containerd-a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f.scope - libcontainer container a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f. Jan 24 00:36:13.001833 containerd[1515]: time="2026-01-24T00:36:13.001801069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wtrng,Uid:f50723bb-0fb2-4f3f-b014-3c7c00d05077,Namespace:kube-system,Attempt:1,} returns sandbox id \"a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f\"" Jan 24 00:36:13.010392 containerd[1515]: time="2026-01-24T00:36:13.010361564Z" level=info msg="CreateContainer within sandbox \"a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:36:13.023537 containerd[1515]: time="2026-01-24T00:36:13.023503583Z" level=info msg="CreateContainer within sandbox \"a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"797ffd4d4064d4ce4337d3990aadca9b1b8b26ece910159ce674ed60f3022303\"" Jan 24 00:36:13.024336 containerd[1515]: time="2026-01-24T00:36:13.024114137Z" level=info msg="StartContainer for \"797ffd4d4064d4ce4337d3990aadca9b1b8b26ece910159ce674ed60f3022303\"" Jan 24 00:36:13.049195 systemd[1]: Started cri-containerd-797ffd4d4064d4ce4337d3990aadca9b1b8b26ece910159ce674ed60f3022303.scope - libcontainer container 797ffd4d4064d4ce4337d3990aadca9b1b8b26ece910159ce674ed60f3022303. Jan 24 00:36:13.074665 containerd[1515]: time="2026-01-24T00:36:13.074213430Z" level=info msg="StartContainer for \"797ffd4d4064d4ce4337d3990aadca9b1b8b26ece910159ce674ed60f3022303\" returns successfully" Jan 24 00:36:13.076101 kernel: bpftool[4931]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:36:13.238239 containerd[1515]: time="2026-01-24T00:36:13.238013590Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:13.239321 containerd[1515]: time="2026-01-24T00:36:13.239168095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:36:13.239321 containerd[1515]: time="2026-01-24T00:36:13.239262094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:36:13.239544 kubelet[2555]: E0124 00:36:13.239480 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:36:13.239663 kubelet[2555]: E0124 00:36:13.239537 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:36:13.239895 kubelet[2555]: E0124 00:36:13.239810 2555 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrslm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-v8smt_calico-system(639522bb-4ded-4c6d-8204-2dc920251ed9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:13.240705 containerd[1515]: time="2026-01-24T00:36:13.240477672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:36:13.241229 kubelet[2555]: E0124 00:36:13.241181 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:36:13.274281 containerd[1515]: time="2026-01-24T00:36:13.274211899Z" level=info msg="StopPodSandbox for \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\"" Jan 24 00:36:13.362301 systemd-networkd[1401]: vxlan.calico: Link UP Jan 24 00:36:13.362317 systemd-networkd[1401]: vxlan.calico: Gained carrier Jan 24 00:36:13.399102 systemd-networkd[1401]: calie6a3f8a21d2: Gained IPv6LL Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.402 [INFO][4960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.403 [INFO][4960] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" iface="eth0" netns="/var/run/netns/cni-020b8a16-4ee4-5d98-e967-ab9934cf439f" Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.403 [INFO][4960] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" iface="eth0" netns="/var/run/netns/cni-020b8a16-4ee4-5d98-e967-ab9934cf439f" Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.404 [INFO][4960] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" iface="eth0" netns="/var/run/netns/cni-020b8a16-4ee4-5d98-e967-ab9934cf439f" Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.404 [INFO][4960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.404 [INFO][4960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.450 [INFO][4993] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" HandleID="k8s-pod-network.86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.451 [INFO][4993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.451 [INFO][4993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.458 [WARNING][4993] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" HandleID="k8s-pod-network.86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.458 [INFO][4993] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" HandleID="k8s-pod-network.86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.459 [INFO][4993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:13.469223 containerd[1515]: 2026-01-24 00:36:13.466 [INFO][4960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:13.470012 containerd[1515]: time="2026-01-24T00:36:13.469989068Z" level=info msg="TearDown network for sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\" successfully" Jan 24 00:36:13.470081 containerd[1515]: time="2026-01-24T00:36:13.470068394Z" level=info msg="StopPodSandbox for \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\" returns successfully" Jan 24 00:36:13.471800 containerd[1515]: time="2026-01-24T00:36:13.470578328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c764d8b9-6vp5z,Uid:f4761a40-d4c5-46a6-ba7c-5af41f9766d5,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:36:13.473489 systemd[1]: run-netns-cni\x2d020b8a16\x2d4ee4\x2d5d98\x2de967\x2dab9934cf439f.mount: Deactivated successfully. 
Jan 24 00:36:13.539532 kubelet[2555]: E0124 00:36:13.539284 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:36:13.539909 kubelet[2555]: E0124 00:36:13.539823 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:36:13.576819 kubelet[2555]: I0124 00:36:13.573328 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wtrng" podStartSLOduration=41.573315251 podStartE2EDuration="41.573315251s" podCreationTimestamp="2026-01-24 00:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:36:13.548576558 +0000 UTC m=+47.474727474" watchObservedRunningTime="2026-01-24 00:36:13.573315251 +0000 UTC m=+47.499466167" Jan 24 00:36:13.666479 systemd-networkd[1401]: cali115deebefb4: Link UP Jan 24 00:36:13.668438 systemd-networkd[1401]: cali115deebefb4: Gained carrier Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.542 [INFO][5001] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0 calico-apiserver-79c764d8b9- calico-apiserver f4761a40-d4c5-46a6-ba7c-5af41f9766d5 1015 0 2026-01-24 00:35:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79c764d8b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-56b1d28098 calico-apiserver-79c764d8b9-6vp5z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali115deebefb4 [] [] }} ContainerID="0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-6vp5z" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.542 [INFO][5001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-6vp5z" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.596 [INFO][5012] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" HandleID="k8s-pod-network.0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.597 [INFO][5012] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" HandleID="k8s-pod-network.0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4f90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-56b1d28098", "pod":"calico-apiserver-79c764d8b9-6vp5z", "timestamp":"2026-01-24 00:36:13.596621262 +0000 UTC"}, Hostname:"ci-4081-3-6-n-56b1d28098", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.597 [INFO][5012] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.597 [INFO][5012] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.597 [INFO][5012] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-56b1d28098' Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.611 [INFO][5012] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.624 [INFO][5012] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.631 [INFO][5012] ipam/ipam.go 511: Trying affinity for 192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.633 [INFO][5012] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.639 [INFO][5012] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.64/26 host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.639 [INFO][5012] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.26.64/26 handle="k8s-pod-network.0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.640 [INFO][5012] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375 Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.648 [INFO][5012] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.26.64/26 handle="k8s-pod-network.0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.655 [INFO][5012] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.26.72/26] block=192.168.26.64/26 
handle="k8s-pod-network.0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.655 [INFO][5012] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.72/26] handle="k8s-pod-network.0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" host="ci-4081-3-6-n-56b1d28098" Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.655 [INFO][5012] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:13.689120 containerd[1515]: 2026-01-24 00:36:13.655 [INFO][5012] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.26.72/26] IPv6=[] ContainerID="0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" HandleID="k8s-pod-network.0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:13.689587 containerd[1515]: 2026-01-24 00:36:13.659 [INFO][5001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-6vp5z" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0", GenerateName:"calico-apiserver-79c764d8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4761a40-d4c5-46a6-ba7c-5af41f9766d5", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c764d8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"", Pod:"calico-apiserver-79c764d8b9-6vp5z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali115deebefb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:13.689587 containerd[1515]: 2026-01-24 00:36:13.659 [INFO][5001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.72/32] ContainerID="0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-6vp5z" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:13.689587 containerd[1515]: 2026-01-24 00:36:13.659 [INFO][5001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali115deebefb4 ContainerID="0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-6vp5z" 
WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:13.689587 containerd[1515]: 2026-01-24 00:36:13.668 [INFO][5001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-6vp5z" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:13.689587 containerd[1515]: 2026-01-24 00:36:13.668 [INFO][5001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-6vp5z" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0", GenerateName:"calico-apiserver-79c764d8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4761a40-d4c5-46a6-ba7c-5af41f9766d5", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c764d8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375", Pod:"calico-apiserver-79c764d8b9-6vp5z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali115deebefb4", MAC:"5e:25:13:74:8a:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:13.689587 containerd[1515]: 2026-01-24 00:36:13.685 [INFO][5001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375" Namespace="calico-apiserver" Pod="calico-apiserver-79c764d8b9-6vp5z" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:13.692943 containerd[1515]: time="2026-01-24T00:36:13.692907369Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:13.695012 containerd[1515]: time="2026-01-24T00:36:13.694984442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:36:13.695345 containerd[1515]: time="2026-01-24T00:36:13.695271821Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:36:13.695460 kubelet[2555]: E0124 00:36:13.695432 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:36:13.695506 kubelet[2555]: E0124 00:36:13.695473 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:36:13.697969 kubelet[2555]: E0124 00:36:13.696383 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwpww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-njp75_calico-system(641bc171-0396-4a65-b184-ec8db27324ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:13.699196 containerd[1515]: time="2026-01-24T00:36:13.699176066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:36:13.714820 systemd-networkd[1401]: calif61e5c8d996: Gained IPv6LL Jan 24 00:36:13.720864 containerd[1515]: time="2026-01-24T00:36:13.720776660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:36:13.722003 containerd[1515]: time="2026-01-24T00:36:13.721489846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:36:13.722070 containerd[1515]: time="2026-01-24T00:36:13.722033527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:13.722213 containerd[1515]: time="2026-01-24T00:36:13.722160082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:36:13.749295 systemd[1]: Started cri-containerd-0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375.scope - libcontainer container 0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375. Jan 24 00:36:13.797704 containerd[1515]: time="2026-01-24T00:36:13.797675456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c764d8b9-6vp5z,Uid:f4761a40-d4c5-46a6-ba7c-5af41f9766d5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375\"" Jan 24 00:36:13.842534 systemd-networkd[1401]: calic74878a7719: Gained IPv6LL Jan 24 00:36:14.136548 containerd[1515]: time="2026-01-24T00:36:14.136485562Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:14.138441 containerd[1515]: time="2026-01-24T00:36:14.138272976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:36:14.138441 containerd[1515]: time="2026-01-24T00:36:14.138342060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:36:14.138568 kubelet[2555]: E0124 00:36:14.138529 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:36:14.138624 kubelet[2555]: E0124 00:36:14.138584 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:36:14.140449 kubelet[2555]: E0124 00:36:14.138809 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwpww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-njp75_calico-system(641bc171-0396-4a65-b184-ec8db27324ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:14.140449 kubelet[2555]: E0124 00:36:14.140318 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:36:14.140753 containerd[1515]: time="2026-01-24T00:36:14.139057051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:36:14.227469 systemd-networkd[1401]: cali99b7d76b661: Gained IPv6LL Jan 24 00:36:14.548415 kubelet[2555]: E0124 00:36:14.547487 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:36:14.550450 kubelet[2555]: E0124 00:36:14.549910 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:36:14.738142 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL Jan 24 00:36:14.750219 containerd[1515]: time="2026-01-24T00:36:14.750162010Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:14.752314 containerd[1515]: time="2026-01-24T00:36:14.751805444Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:36:14.752314 containerd[1515]: time="2026-01-24T00:36:14.751838701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:36:14.752402 kubelet[2555]: E0124 00:36:14.752073 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:14.752402 kubelet[2555]: E0124 00:36:14.752117 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:14.752402 kubelet[2555]: E0124 00:36:14.752253 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdr4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79c764d8b9-6vp5z_calico-apiserver(f4761a40-d4c5-46a6-ba7c-5af41f9766d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:14.753427 kubelet[2555]: E0124 00:36:14.753381 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:36:15.546198 kubelet[2555]: E0124 00:36:15.546141 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:36:15.634297 systemd-networkd[1401]: cali115deebefb4: Gained IPv6LL Jan 24 00:36:18.278104 containerd[1515]: 
time="2026-01-24T00:36:18.277693564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:36:18.728369 containerd[1515]: time="2026-01-24T00:36:18.728293431Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:18.730261 containerd[1515]: time="2026-01-24T00:36:18.730146581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:36:18.730666 containerd[1515]: time="2026-01-24T00:36:18.730543162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:36:18.731160 kubelet[2555]: E0124 00:36:18.731009 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:36:18.731160 kubelet[2555]: E0124 00:36:18.731080 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:36:18.732970 kubelet[2555]: E0124 00:36:18.731358 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8b3c08fc22324961a4ad528b035b9863,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cpl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-699d95d6f6-9xqqx_calico-system(b07048e0-47ed-414d-b89a-27e90221643c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:18.735442 containerd[1515]: time="2026-01-24T00:36:18.735402570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:36:19.166290 containerd[1515]: time="2026-01-24T00:36:19.166113727Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:19.168105 containerd[1515]: time="2026-01-24T00:36:19.168042174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:36:19.168105 containerd[1515]: time="2026-01-24T00:36:19.168166485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:36:19.168399 kubelet[2555]: E0124 00:36:19.168331 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:36:19.168501 kubelet[2555]: E0124 00:36:19.168400 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:36:19.168593 kubelet[2555]: E0124 00:36:19.168543 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cpl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-699d95d6f6-9xqqx_calico-system(b07048e0-47ed-414d-b89a-27e90221643c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:19.170299 kubelet[2555]: E0124 00:36:19.170207 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:36:25.274462 containerd[1515]: time="2026-01-24T00:36:25.274350274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:36:25.706072 containerd[1515]: time="2026-01-24T00:36:25.705997370Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:25.707900 containerd[1515]: time="2026-01-24T00:36:25.707851127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:36:25.708039 containerd[1515]: time="2026-01-24T00:36:25.707947682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:36:25.708199 kubelet[2555]: E0124 00:36:25.708133 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:36:25.708199 kubelet[2555]: E0124 00:36:25.708182 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:36:25.708764 kubelet[2555]: E0124 00:36:25.708389 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vctxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6494d5bd79-znrpb_calico-system(3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:25.709915 kubelet[2555]: E0124 00:36:25.709477 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:36:25.710082 containerd[1515]: time="2026-01-24T00:36:25.709111021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:36:26.157163 containerd[1515]: time="2026-01-24T00:36:26.156909245Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:26.158920 containerd[1515]: time="2026-01-24T00:36:26.158760416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:36:26.158920 containerd[1515]: time="2026-01-24T00:36:26.158827886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:36:26.159073 kubelet[2555]: E0124 00:36:26.159021 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:36:26.159183 kubelet[2555]: E0124 00:36:26.159075 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:36:26.159368 kubelet[2555]: 
E0124 00:36:26.159264 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrslm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-v8smt_calico-system(639522bb-4ded-4c6d-8204-2dc920251ed9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:26.160931 kubelet[2555]: E0124 00:36:26.160859 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" 
podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:36:26.228655 containerd[1515]: time="2026-01-24T00:36:26.228606693Z" level=info msg="StopPodSandbox for \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\"" Jan 24 00:36:26.281480 containerd[1515]: time="2026-01-24T00:36:26.278297256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.300 [WARNING][5145] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"639522bb-4ded-4c6d-8204-2dc920251ed9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2", Pod:"goldmane-666569f655-v8smt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif61e5c8d996", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.301 [INFO][5145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.301 [INFO][5145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" iface="eth0" netns="" Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.301 [INFO][5145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.301 [INFO][5145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.353 [INFO][5155] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" HandleID="k8s-pod-network.9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.354 [INFO][5155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.354 [INFO][5155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.361 [WARNING][5155] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" HandleID="k8s-pod-network.9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.361 [INFO][5155] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" HandleID="k8s-pod-network.9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.363 [INFO][5155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:26.369266 containerd[1515]: 2026-01-24 00:36:26.366 [INFO][5145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:26.370456 containerd[1515]: time="2026-01-24T00:36:26.369301211Z" level=info msg="TearDown network for sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\" successfully" Jan 24 00:36:26.370456 containerd[1515]: time="2026-01-24T00:36:26.369329675Z" level=info msg="StopPodSandbox for \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\" returns successfully" Jan 24 00:36:26.370792 containerd[1515]: time="2026-01-24T00:36:26.370741649Z" level=info msg="RemovePodSandbox for \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\"" Jan 24 00:36:26.370881 containerd[1515]: time="2026-01-24T00:36:26.370790386Z" level=info msg="Forcibly stopping sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\"" Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.440 [WARNING][5170] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"639522bb-4ded-4c6d-8204-2dc920251ed9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"0c73c65da9cb1d619c066761aa5a3f3e8633aa01f4275f76e37669ca798c46c2", Pod:"goldmane-666569f655-v8smt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif61e5c8d996", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.441 [INFO][5170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.441 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" iface="eth0" netns="" Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.441 [INFO][5170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.441 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.475 [INFO][5177] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" HandleID="k8s-pod-network.9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.476 [INFO][5177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.476 [INFO][5177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.487 [WARNING][5177] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" HandleID="k8s-pod-network.9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.488 [INFO][5177] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" HandleID="k8s-pod-network.9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Workload="ci--4081--3--6--n--56b1d28098-k8s-goldmane--666569f655--v8smt-eth0" Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.490 [INFO][5177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:26.499357 containerd[1515]: 2026-01-24 00:36:26.494 [INFO][5170] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f" Jan 24 00:36:26.499357 containerd[1515]: time="2026-01-24T00:36:26.499042048Z" level=info msg="TearDown network for sandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\" successfully" Jan 24 00:36:26.505838 containerd[1515]: time="2026-01-24T00:36:26.505764247Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:36:26.506641 containerd[1515]: time="2026-01-24T00:36:26.506032817Z" level=info msg="RemovePodSandbox \"9775b285bee3f5ad8793bb160d951079b2972e74e474a5b320c6f426919ddf5f\" returns successfully" Jan 24 00:36:26.507166 containerd[1515]: time="2026-01-24T00:36:26.507140295Z" level=info msg="StopPodSandbox for \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\"" Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.555 [WARNING][5192] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"445371de-a1b4-4071-8903-a5dc58d21e9e", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c", Pod:"coredns-674b8bbfcf-nznf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1c891a3ac4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.555 [INFO][5192] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.555 [INFO][5192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" iface="eth0" netns="" Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.555 [INFO][5192] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.556 [INFO][5192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.580 [INFO][5200] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" HandleID="k8s-pod-network.0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.580 [INFO][5200] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.581 [INFO][5200] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.594 [WARNING][5200] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" HandleID="k8s-pod-network.0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.594 [INFO][5200] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" HandleID="k8s-pod-network.0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.596 [INFO][5200] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:26.607916 containerd[1515]: 2026-01-24 00:36:26.604 [INFO][5192] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:26.607916 containerd[1515]: time="2026-01-24T00:36:26.607832369Z" level=info msg="TearDown network for sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\" successfully" Jan 24 00:36:26.607916 containerd[1515]: time="2026-01-24T00:36:26.607861213Z" level=info msg="StopPodSandbox for \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\" returns successfully" Jan 24 00:36:26.609005 containerd[1515]: time="2026-01-24T00:36:26.608454763Z" level=info msg="RemovePodSandbox for \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\"" Jan 24 00:36:26.609005 containerd[1515]: time="2026-01-24T00:36:26.608481157Z" level=info msg="Forcibly stopping sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\"" Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.667 [WARNING][5215] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"445371de-a1b4-4071-8903-a5dc58d21e9e", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"42652d4e065127d08e84cf37df0533ce22bbe9ab3c6d5d5b5ca9f7f95d3c2a5c", Pod:"coredns-674b8bbfcf-nznf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1c891a3ac4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.669 [INFO][5215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.669 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" iface="eth0" netns="" Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.669 [INFO][5215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.669 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.705 [INFO][5222] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" HandleID="k8s-pod-network.0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.705 [INFO][5222] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.705 [INFO][5222] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.712 [WARNING][5222] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" HandleID="k8s-pod-network.0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.712 [INFO][5222] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" HandleID="k8s-pod-network.0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--nznf8-eth0" Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.714 [INFO][5222] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:26.719173 containerd[1515]: 2026-01-24 00:36:26.716 [INFO][5215] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539" Jan 24 00:36:26.719173 containerd[1515]: time="2026-01-24T00:36:26.718740221Z" level=info msg="TearDown network for sandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\" successfully" Jan 24 00:36:26.722869 containerd[1515]: time="2026-01-24T00:36:26.722840633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:36:26.722912 containerd[1515]: time="2026-01-24T00:36:26.722878958Z" level=info msg="RemovePodSandbox \"0a82747c949aba0275de6efd798f32d0cb1a02cdbec27cb0c7ef7311831c9539\" returns successfully" Jan 24 00:36:26.723325 containerd[1515]: time="2026-01-24T00:36:26.723300803Z" level=info msg="StopPodSandbox for \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\"" Jan 24 00:36:26.727803 containerd[1515]: time="2026-01-24T00:36:26.727712801Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:26.729476 containerd[1515]: time="2026-01-24T00:36:26.729448844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:36:26.730362 containerd[1515]: time="2026-01-24T00:36:26.729516345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:36:26.730396 kubelet[2555]: E0124 00:36:26.729610 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:36:26.730396 kubelet[2555]: E0124 00:36:26.729647 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:36:26.730396 kubelet[2555]: E0124 00:36:26.729738 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwpww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-njp75_calico-system(641bc171-0396-4a65-b184-ec8db27324ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:26.731766 containerd[1515]: time="2026-01-24T00:36:26.731597309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.752 [WARNING][5237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0", GenerateName:"calico-apiserver-79c764d8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"88376c0e-7993-4786-9815-0474220bc333", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c764d8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e", Pod:"calico-apiserver-79c764d8b9-zh62f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a3f8a21d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.753 [INFO][5237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.753 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" iface="eth0" netns="" Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.753 [INFO][5237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.753 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.768 [INFO][5244] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" HandleID="k8s-pod-network.abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.768 [INFO][5244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.768 [INFO][5244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.772 [WARNING][5244] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" HandleID="k8s-pod-network.abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.772 [INFO][5244] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" HandleID="k8s-pod-network.abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.773 [INFO][5244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:26.777604 containerd[1515]: 2026-01-24 00:36:26.775 [INFO][5237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:26.777604 containerd[1515]: time="2026-01-24T00:36:26.777538214Z" level=info msg="TearDown network for sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\" successfully" Jan 24 00:36:26.777604 containerd[1515]: time="2026-01-24T00:36:26.777561287Z" level=info msg="StopPodSandbox for \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\" returns successfully" Jan 24 00:36:26.778452 containerd[1515]: time="2026-01-24T00:36:26.778432229Z" level=info msg="RemovePodSandbox for \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\"" Jan 24 00:36:26.778488 containerd[1515]: time="2026-01-24T00:36:26.778452802Z" level=info msg="Forcibly stopping sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\"" Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.802 [WARNING][5258] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0", GenerateName:"calico-apiserver-79c764d8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"88376c0e-7993-4786-9815-0474220bc333", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c764d8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"8b4180ddc1e9af3f5eca4f6043d8555c31dcfa6deddc255b074b3eb7c54ec76e", Pod:"calico-apiserver-79c764d8b9-zh62f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a3f8a21d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.803 [INFO][5258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.803 [INFO][5258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" iface="eth0" netns="" Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.803 [INFO][5258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.803 [INFO][5258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.817 [INFO][5265] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" HandleID="k8s-pod-network.abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.818 [INFO][5265] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.818 [INFO][5265] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.823 [WARNING][5265] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" HandleID="k8s-pod-network.abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.823 [INFO][5265] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" HandleID="k8s-pod-network.abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--zh62f-eth0" Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.824 [INFO][5265] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:26.827280 containerd[1515]: 2026-01-24 00:36:26.825 [INFO][5258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff" Jan 24 00:36:26.827595 containerd[1515]: time="2026-01-24T00:36:26.827307948Z" level=info msg="TearDown network for sandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\" successfully" Jan 24 00:36:26.831447 containerd[1515]: time="2026-01-24T00:36:26.831407390Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:36:26.831486 containerd[1515]: time="2026-01-24T00:36:26.831445106Z" level=info msg="RemovePodSandbox \"abbc479d4e2bacc540b6641b87eddb35a8daf631df4a2df8d2efb67022d153ff\" returns successfully" Jan 24 00:36:26.831862 containerd[1515]: time="2026-01-24T00:36:26.831797348Z" level=info msg="StopPodSandbox for \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\"" Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.854 [WARNING][5279] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-whisker--6f8b46cc7d--cmcg7-eth0" Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.854 [INFO][5279] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.854 [INFO][5279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" iface="eth0" netns="" Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.854 [INFO][5279] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.854 [INFO][5279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.868 [INFO][5286] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" HandleID="k8s-pod-network.3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--6f8b46cc7d--cmcg7-eth0" Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.868 [INFO][5286] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.868 [INFO][5286] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.873 [WARNING][5286] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" HandleID="k8s-pod-network.3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--6f8b46cc7d--cmcg7-eth0" Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.873 [INFO][5286] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" HandleID="k8s-pod-network.3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--6f8b46cc7d--cmcg7-eth0" Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.874 [INFO][5286] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:26.877557 containerd[1515]: 2026-01-24 00:36:26.876 [INFO][5279] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:26.877825 containerd[1515]: time="2026-01-24T00:36:26.877574288Z" level=info msg="TearDown network for sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\" successfully" Jan 24 00:36:26.877825 containerd[1515]: time="2026-01-24T00:36:26.877594591Z" level=info msg="StopPodSandbox for \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\" returns successfully" Jan 24 00:36:26.878492 containerd[1515]: time="2026-01-24T00:36:26.878257652Z" level=info msg="RemovePodSandbox for \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\"" Jan 24 00:36:26.878492 containerd[1515]: time="2026-01-24T00:36:26.878280485Z" level=info msg="Forcibly stopping sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\"" Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.901 [WARNING][5301] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" WorkloadEndpoint="ci--4081--3--6--n--56b1d28098-k8s-whisker--6f8b46cc7d--cmcg7-eth0" Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.901 [INFO][5301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.901 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" iface="eth0" netns="" Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.901 [INFO][5301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.901 [INFO][5301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.915 [INFO][5308] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" HandleID="k8s-pod-network.3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--6f8b46cc7d--cmcg7-eth0" Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.915 [INFO][5308] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.915 [INFO][5308] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.919 [WARNING][5308] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" HandleID="k8s-pod-network.3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--6f8b46cc7d--cmcg7-eth0" Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.919 [INFO][5308] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" HandleID="k8s-pod-network.3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Workload="ci--4081--3--6--n--56b1d28098-k8s-whisker--6f8b46cc7d--cmcg7-eth0" Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.920 [INFO][5308] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:26.923631 containerd[1515]: 2026-01-24 00:36:26.922 [INFO][5301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62" Jan 24 00:36:26.923948 containerd[1515]: time="2026-01-24T00:36:26.923633930Z" level=info msg="TearDown network for sandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\" successfully" Jan 24 00:36:26.927297 containerd[1515]: time="2026-01-24T00:36:26.927248547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:36:26.927297 containerd[1515]: time="2026-01-24T00:36:26.927294824Z" level=info msg="RemovePodSandbox \"3b9e5867907cb2eea8662418433e2c5755daf3f9543f0d510ae73e11a9062a62\" returns successfully" Jan 24 00:36:26.927685 containerd[1515]: time="2026-01-24T00:36:26.927665441Z" level=info msg="StopPodSandbox for \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\"" Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.951 [WARNING][5322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"641bc171-0396-4a65-b184-ec8db27324ea", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994", Pod:"csi-node-driver-njp75", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic74878a7719", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.951 [INFO][5322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.951 [INFO][5322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" iface="eth0" netns="" Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.951 [INFO][5322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.951 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.969 [INFO][5329] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" HandleID="k8s-pod-network.5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.969 [INFO][5329] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.969 [INFO][5329] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.973 [WARNING][5329] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" HandleID="k8s-pod-network.5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.973 [INFO][5329] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" HandleID="k8s-pod-network.5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.975 [INFO][5329] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:26.979450 containerd[1515]: 2026-01-24 00:36:26.977 [INFO][5322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:26.980308 containerd[1515]: time="2026-01-24T00:36:26.979476274Z" level=info msg="TearDown network for sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\" successfully" Jan 24 00:36:26.980308 containerd[1515]: time="2026-01-24T00:36:26.979503428Z" level=info msg="StopPodSandbox for \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\" returns successfully" Jan 24 00:36:26.980308 containerd[1515]: time="2026-01-24T00:36:26.979915631Z" level=info msg="RemovePodSandbox for \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\"" Jan 24 00:36:26.980308 containerd[1515]: time="2026-01-24T00:36:26.979950466Z" level=info msg="Forcibly stopping sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\"" Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.005 [WARNING][5343] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"641bc171-0396-4a65-b184-ec8db27324ea", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"898aa6d8037cfc1aeb38389c92652c4d20fc65f94e49be625b798bfdecf44994", Pod:"csi-node-driver-njp75", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic74878a7719", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.005 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.005 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" iface="eth0" netns="" Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.005 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.005 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.021 [INFO][5350] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" HandleID="k8s-pod-network.5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.021 [INFO][5350] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.021 [INFO][5350] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.025 [WARNING][5350] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" HandleID="k8s-pod-network.5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.025 [INFO][5350] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" HandleID="k8s-pod-network.5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Workload="ci--4081--3--6--n--56b1d28098-k8s-csi--node--driver--njp75-eth0" Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.026 [INFO][5350] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:27.029912 containerd[1515]: 2026-01-24 00:36:27.028 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f" Jan 24 00:36:27.029912 containerd[1515]: time="2026-01-24T00:36:27.029871992Z" level=info msg="TearDown network for sandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\" successfully" Jan 24 00:36:27.033523 containerd[1515]: time="2026-01-24T00:36:27.033414630Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:36:27.033523 containerd[1515]: time="2026-01-24T00:36:27.033453616Z" level=info msg="RemovePodSandbox \"5f39d8ffea70a7fc950b7b6164697c58707465030afb6d5d81ce6240757fdb8f\" returns successfully" Jan 24 00:36:27.034188 containerd[1515]: time="2026-01-24T00:36:27.033973613Z" level=info msg="StopPodSandbox for \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\"" Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.057 [WARNING][5364] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0", GenerateName:"calico-apiserver-79c764d8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4761a40-d4c5-46a6-ba7c-5af41f9766d5", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c764d8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375", Pod:"calico-apiserver-79c764d8b9-6vp5z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali115deebefb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.057 [INFO][5364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.057 [INFO][5364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" iface="eth0" netns="" Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.057 [INFO][5364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.057 [INFO][5364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.071 [INFO][5371] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" HandleID="k8s-pod-network.86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.071 [INFO][5371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.071 [INFO][5371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.075 [WARNING][5371] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" HandleID="k8s-pod-network.86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.075 [INFO][5371] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" HandleID="k8s-pod-network.86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.076 [INFO][5371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:27.079549 containerd[1515]: 2026-01-24 00:36:27.077 [INFO][5364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:27.079873 containerd[1515]: time="2026-01-24T00:36:27.079578813Z" level=info msg="TearDown network for sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\" successfully" Jan 24 00:36:27.079873 containerd[1515]: time="2026-01-24T00:36:27.079599096Z" level=info msg="StopPodSandbox for \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\" returns successfully" Jan 24 00:36:27.080148 containerd[1515]: time="2026-01-24T00:36:27.080078958Z" level=info msg="RemovePodSandbox for \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\"" Jan 24 00:36:27.080148 containerd[1515]: time="2026-01-24T00:36:27.080100121Z" level=info msg="Forcibly stopping sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\"" Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.105 [WARNING][5386] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0", GenerateName:"calico-apiserver-79c764d8b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4761a40-d4c5-46a6-ba7c-5af41f9766d5", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c764d8b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"0f5fdfd6b126817768073191453251bd5ffb23eead69512b90d74223f2aed375", Pod:"calico-apiserver-79c764d8b9-6vp5z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali115deebefb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.105 [INFO][5386] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.105 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" iface="eth0" netns="" Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.105 [INFO][5386] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.105 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.121 [INFO][5394] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" HandleID="k8s-pod-network.86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.121 [INFO][5394] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.121 [INFO][5394] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.126 [WARNING][5394] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" HandleID="k8s-pod-network.86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.126 [INFO][5394] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" HandleID="k8s-pod-network.86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--apiserver--79c764d8b9--6vp5z-eth0" Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.127 [INFO][5394] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:27.131460 containerd[1515]: 2026-01-24 00:36:27.128 [INFO][5386] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560" Jan 24 00:36:27.131460 containerd[1515]: time="2026-01-24T00:36:27.130250009Z" level=info msg="TearDown network for sandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\" successfully" Jan 24 00:36:27.134403 containerd[1515]: time="2026-01-24T00:36:27.134378274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:36:27.134447 containerd[1515]: time="2026-01-24T00:36:27.134415880Z" level=info msg="RemovePodSandbox \"86aa2bf56518d7bbeab6cfe5875891ab598c30b2594c9fae9ec70abca5dac560\" returns successfully" Jan 24 00:36:27.134797 containerd[1515]: time="2026-01-24T00:36:27.134778884Z" level=info msg="StopPodSandbox for \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\"" Jan 24 00:36:27.155835 containerd[1515]: time="2026-01-24T00:36:27.155778445Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:27.156962 containerd[1515]: time="2026-01-24T00:36:27.156900562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:36:27.157077 containerd[1515]: time="2026-01-24T00:36:27.157049204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:36:27.157410 kubelet[2555]: E0124 00:36:27.157375 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:36:27.157494 kubelet[2555]: E0124 00:36:27.157420 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:36:27.157542 kubelet[2555]: E0124 00:36:27.157510 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwpww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-njp75_calico-system(641bc171-0396-4a65-b184-ec8db27324ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:27.159721 kubelet[2555]: E0124 00:36:27.158816 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.158 [WARNING][5408] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0", GenerateName:"calico-kube-controllers-6494d5bd79-", Namespace:"calico-system", SelfLink:"", UID:"3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6494d5bd79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406", Pod:"calico-kube-controllers-6494d5bd79-znrpb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8cc7da45119", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.159 [INFO][5408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.159 [INFO][5408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" iface="eth0" netns="" Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.159 [INFO][5408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.159 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.176 [INFO][5415] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" HandleID="k8s-pod-network.f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.176 [INFO][5415] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.177 [INFO][5415] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.181 [WARNING][5415] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" HandleID="k8s-pod-network.f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.181 [INFO][5415] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" HandleID="k8s-pod-network.f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.182 [INFO][5415] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:27.185422 containerd[1515]: 2026-01-24 00:36:27.183 [INFO][5408] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:36:27.185739 containerd[1515]: time="2026-01-24T00:36:27.185454340Z" level=info msg="TearDown network for sandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\" successfully" Jan 24 00:36:27.185739 containerd[1515]: time="2026-01-24T00:36:27.185484534Z" level=info msg="StopPodSandbox for \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\" returns successfully" Jan 24 00:36:27.187130 containerd[1515]: time="2026-01-24T00:36:27.186144863Z" level=info msg="RemovePodSandbox for \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\"" Jan 24 00:36:27.187130 containerd[1515]: time="2026-01-24T00:36:27.186171617Z" level=info msg="Forcibly stopping sandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\"" Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.211 [WARNING][5429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0", GenerateName:"calico-kube-controllers-6494d5bd79-", Namespace:"calico-system", SelfLink:"", UID:"3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6494d5bd79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"2eed52a4bd5d8eced89c9e8066fb0878bf7b7375d9257d572683c3317aa63406", Pod:"calico-kube-controllers-6494d5bd79-znrpb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8cc7da45119", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.212 [INFO][5429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.212 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" iface="eth0" netns="" Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.212 [INFO][5429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.212 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.226 [INFO][5436] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" HandleID="k8s-pod-network.f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.226 [INFO][5436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.226 [INFO][5436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.230 [WARNING][5436] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" HandleID="k8s-pod-network.f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.231 [INFO][5436] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" HandleID="k8s-pod-network.f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Workload="ci--4081--3--6--n--56b1d28098-k8s-calico--kube--controllers--6494d5bd79--znrpb-eth0" Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.232 [INFO][5436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:27.235290 containerd[1515]: 2026-01-24 00:36:27.233 [INFO][5429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338" Jan 24 00:36:27.235609 containerd[1515]: time="2026-01-24T00:36:27.235341478Z" level=info msg="TearDown network for sandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\" successfully" Jan 24 00:36:27.239198 containerd[1515]: time="2026-01-24T00:36:27.239170528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:36:27.239279 containerd[1515]: time="2026-01-24T00:36:27.239225687Z" level=info msg="RemovePodSandbox \"f421c3bb0ef7d42af10db1a97c40ec16fa1851d8a1f0f81c000e64588bb50338\" returns successfully" Jan 24 00:36:27.239648 containerd[1515]: time="2026-01-24T00:36:27.239628437Z" level=info msg="StopPodSandbox for \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\"" Jan 24 00:36:27.275573 containerd[1515]: time="2026-01-24T00:36:27.275515408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.264 [WARNING][5450] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f50723bb-0fb2-4f3f-b014-3c7c00d05077", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f", Pod:"coredns-674b8bbfcf-wtrng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99b7d76b661", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.264 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.264 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" iface="eth0" netns="" Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.264 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.264 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.281 [INFO][5457] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" HandleID="k8s-pod-network.a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.281 [INFO][5457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.281 [INFO][5457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.286 [WARNING][5457] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" HandleID="k8s-pod-network.a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.286 [INFO][5457] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" HandleID="k8s-pod-network.a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.287 [INFO][5457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:27.290402 containerd[1515]: 2026-01-24 00:36:27.288 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:27.290997 containerd[1515]: time="2026-01-24T00:36:27.290373852Z" level=info msg="TearDown network for sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\" successfully" Jan 24 00:36:27.290997 containerd[1515]: time="2026-01-24T00:36:27.290993665Z" level=info msg="StopPodSandbox for \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\" returns successfully" Jan 24 00:36:27.291362 containerd[1515]: time="2026-01-24T00:36:27.291346768Z" level=info msg="RemovePodSandbox for \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\"" Jan 24 00:36:27.291412 containerd[1515]: time="2026-01-24T00:36:27.291369381Z" level=info msg="Forcibly stopping sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\"" Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.317 [WARNING][5471] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f50723bb-0fb2-4f3f-b014-3c7c00d05077", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-56b1d28098", ContainerID:"a333a9b232811a9faae65a201d3b22c079177cbdddb7302b90923eae33ce707f", Pod:"coredns-674b8bbfcf-wtrng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99b7d76b661", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.317 [INFO][5471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.317 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" iface="eth0" netns="" Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.317 [INFO][5471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.317 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.332 [INFO][5479] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" HandleID="k8s-pod-network.a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.332 [INFO][5479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.332 [INFO][5479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.339 [WARNING][5479] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" HandleID="k8s-pod-network.a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.339 [INFO][5479] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" HandleID="k8s-pod-network.a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Workload="ci--4081--3--6--n--56b1d28098-k8s-coredns--674b8bbfcf--wtrng-eth0" Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.340 [INFO][5479] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:36:27.343488 containerd[1515]: 2026-01-24 00:36:27.341 [INFO][5471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d" Jan 24 00:36:27.343817 containerd[1515]: time="2026-01-24T00:36:27.343506325Z" level=info msg="TearDown network for sandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\" successfully" Jan 24 00:36:27.347202 containerd[1515]: time="2026-01-24T00:36:27.347164850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:36:27.347258 containerd[1515]: time="2026-01-24T00:36:27.347206357Z" level=info msg="RemovePodSandbox \"a97755dbbd7554dabd22ee5c77c5e6f2b354183ede267540f9e58185c230614d\" returns successfully" Jan 24 00:36:27.711157 containerd[1515]: time="2026-01-24T00:36:27.711085481Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:27.714161 containerd[1515]: time="2026-01-24T00:36:27.713804557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:36:27.714161 containerd[1515]: time="2026-01-24T00:36:27.713868946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:36:27.714426 kubelet[2555]: E0124 00:36:27.714247 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:27.714426 kubelet[2555]: E0124 00:36:27.714315 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:27.714608 kubelet[2555]: E0124 00:36:27.714486 2555 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmf64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79c764d8b9-zh62f_calico-apiserver(88376c0e-7993-4786-9815-0474220bc333): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:27.716187 kubelet[2555]: E0124 00:36:27.716112 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:36:29.274915 containerd[1515]: time="2026-01-24T00:36:29.274519875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:36:29.715174 containerd[1515]: time="2026-01-24T00:36:29.715106647Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:29.716891 containerd[1515]: time="2026-01-24T00:36:29.716673363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:36:29.716891 containerd[1515]: time="2026-01-24T00:36:29.716818474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:36:29.718393 kubelet[2555]: E0124 00:36:29.718329 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:29.719796 kubelet[2555]: E0124 00:36:29.719010 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:29.719796 kubelet[2555]: E0124 00:36:29.719280 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdr4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79c764d8b9-6vp5z_calico-apiserver(f4761a40-d4c5-46a6-ba7c-5af41f9766d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:29.720822 kubelet[2555]: E0124 00:36:29.720608 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:36:31.275127 kubelet[2555]: E0124 00:36:31.274848 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:36:39.274687 kubelet[2555]: E0124 00:36:39.274609 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:36:40.277292 kubelet[2555]: E0124 00:36:40.276151 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:36:40.277292 kubelet[2555]: E0124 00:36:40.277167 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:36:40.285339 kubelet[2555]: E0124 00:36:40.285266 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:36:43.276023 kubelet[2555]: E0124 00:36:43.275913 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:36:44.280619 containerd[1515]: time="2026-01-24T00:36:44.280536269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:36:44.718535 containerd[1515]: time="2026-01-24T00:36:44.718479817Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:44.720526 containerd[1515]: time="2026-01-24T00:36:44.720403581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:36:44.720526 containerd[1515]: time="2026-01-24T00:36:44.720482668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:36:44.720693 kubelet[2555]: E0124 00:36:44.720638 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:36:44.720693 kubelet[2555]: E0124 00:36:44.720679 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:36:44.720996 kubelet[2555]: E0124 00:36:44.720783 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8b3c08fc22324961a4ad528b035b9863,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cpl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-699d95d6f6-9xqqx_calico-system(b07048e0-47ed-414d-b89a-27e90221643c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:44.723850 containerd[1515]: time="2026-01-24T00:36:44.723464364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:36:45.152345 containerd[1515]: time="2026-01-24T00:36:45.152206906Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:45.154552 containerd[1515]: time="2026-01-24T00:36:45.154469643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:36:45.154632 containerd[1515]: time="2026-01-24T00:36:45.154584006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:36:45.155817 kubelet[2555]: E0124 00:36:45.155084 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:36:45.155817 kubelet[2555]: E0124 00:36:45.155138 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:36:45.155817 kubelet[2555]: E0124 00:36:45.155236 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cpl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-699d95d6f6-9xqqx_calico-system(b07048e0-47ed-414d-b89a-27e90221643c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:45.157112 kubelet[2555]: E0124 00:36:45.157082 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:36:51.276840 containerd[1515]: time="2026-01-24T00:36:51.275391310Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:36:51.906202 containerd[1515]: time="2026-01-24T00:36:51.905770016Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:51.907671 containerd[1515]: time="2026-01-24T00:36:51.907548868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:36:51.907790 containerd[1515]: time="2026-01-24T00:36:51.907700341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:36:51.908309 kubelet[2555]: E0124 00:36:51.908192 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:36:51.908309 kubelet[2555]: E0124 00:36:51.908285 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:36:51.909600 kubelet[2555]: E0124 00:36:51.908450 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwpww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-njp75_calico-system(641bc171-0396-4a65-b184-ec8db27324ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:51.911105 containerd[1515]: time="2026-01-24T00:36:51.911063806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:36:52.354767 containerd[1515]: time="2026-01-24T00:36:52.354514308Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:52.356366 containerd[1515]: time="2026-01-24T00:36:52.356190472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:36:52.356366 containerd[1515]: time="2026-01-24T00:36:52.356301730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:36:52.358450 kubelet[2555]: E0124 00:36:52.358175 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:36:52.358450 kubelet[2555]: E0124 00:36:52.358246 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:36:52.359128 kubelet[2555]: E0124 00:36:52.358530 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwpww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-njp75_calico-system(641bc171-0396-4a65-b184-ec8db27324ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:52.360816 kubelet[2555]: E0124 00:36:52.360307 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:36:52.361167 containerd[1515]: time="2026-01-24T00:36:52.360420656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:36:52.824464 containerd[1515]: time="2026-01-24T00:36:52.824235319Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:52.825865 containerd[1515]: time="2026-01-24T00:36:52.825705374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:36:52.825865 containerd[1515]: time="2026-01-24T00:36:52.825821052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:36:52.827142 kubelet[2555]: E0124 00:36:52.826481 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:52.827142 kubelet[2555]: E0124 00:36:52.826542 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:52.827142 kubelet[2555]: E0124 00:36:52.826684 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmf64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79c764d8b9-zh62f_calico-apiserver(88376c0e-7993-4786-9815-0474220bc333): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:52.828321 kubelet[2555]: E0124 00:36:52.828285 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:36:53.276773 containerd[1515]: time="2026-01-24T00:36:53.276715960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:36:53.720367 containerd[1515]: time="2026-01-24T00:36:53.720111334Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:53.722279 containerd[1515]: time="2026-01-24T00:36:53.722151491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:36:53.722279 containerd[1515]: time="2026-01-24T00:36:53.722230163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:36:53.725180 kubelet[2555]: E0124 00:36:53.722527 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:36:53.725180 kubelet[2555]: E0124 00:36:53.723129 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:36:53.725180 kubelet[2555]: E0124 00:36:53.723373 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vctxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6494d5bd79-znrpb_calico-system(3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:53.725180 kubelet[2555]: E0124 00:36:53.724990 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:36:54.279700 containerd[1515]: time="2026-01-24T00:36:54.279645215Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:36:54.707764 containerd[1515]: time="2026-01-24T00:36:54.706982818Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:54.709529 containerd[1515]: time="2026-01-24T00:36:54.709144885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:36:54.709529 containerd[1515]: time="2026-01-24T00:36:54.709208919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:36:54.710960 kubelet[2555]: E0124 00:36:54.710183 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:36:54.710960 kubelet[2555]: E0124 00:36:54.710260 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:36:54.712044 kubelet[2555]: E0124 00:36:54.711658 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrslm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-v8smt_calico-system(639522bb-4ded-4c6d-8204-2dc920251ed9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:54.713539 kubelet[2555]: E0124 00:36:54.713429 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:36:56.279236 containerd[1515]: time="2026-01-24T00:36:56.278854546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:36:56.718084 containerd[1515]: time="2026-01-24T00:36:56.718010089Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:36:56.720170 containerd[1515]: time="2026-01-24T00:36:56.720059028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:36:56.720713 containerd[1515]: time="2026-01-24T00:36:56.720328225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:36:56.721136 kubelet[2555]: E0124 00:36:56.721084 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:56.721622 kubelet[2555]: E0124 00:36:56.721142 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:56.721622 kubelet[2555]: E0124 00:36:56.721310 2555 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdr4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79c764d8b9-6vp5z_calico-apiserver(f4761a40-d4c5-46a6-ba7c-5af41f9766d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:56.723814 kubelet[2555]: E0124 00:36:56.723079 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:36:58.278188 kubelet[2555]: E0124 00:36:58.277754 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:37:05.280842 kubelet[2555]: E0124 00:37:05.280723 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:37:05.286173 kubelet[2555]: E0124 00:37:05.286014 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:37:07.277060 kubelet[2555]: E0124 00:37:07.275198 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:37:08.277669 kubelet[2555]: E0124 00:37:08.275243 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:37:09.275070 kubelet[2555]: E0124 00:37:09.274671 2555 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:37:10.278213 kubelet[2555]: E0124 00:37:10.278173 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:37:11.116379 systemd[1]: Started sshd@7-65.21.184.255:22-20.161.92.111:50708.service - OpenSSH per-connection server daemon (20.161.92.111:50708). Jan 24 00:37:11.898400 sshd[5554]: Accepted publickey for core from 20.161.92.111 port 50708 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:11.903710 sshd[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:11.915752 systemd-logind[1487]: New session 8 of user core. Jan 24 00:37:11.925700 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:37:12.541213 sshd[5554]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:12.548728 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:37:12.549558 systemd[1]: sshd@7-65.21.184.255:22-20.161.92.111:50708.service: Deactivated successfully. Jan 24 00:37:12.554072 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:37:12.556068 systemd-logind[1487]: Removed session 8. Jan 24 00:37:16.275429 kubelet[2555]: E0124 00:37:16.275163 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:37:17.684597 systemd[1]: Started sshd@8-65.21.184.255:22-20.161.92.111:44054.service - OpenSSH per-connection server daemon (20.161.92.111:44054). 
Jan 24 00:37:18.470668 sshd[5571]: Accepted publickey for core from 20.161.92.111 port 44054 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:18.472613 sshd[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:18.478523 systemd-logind[1487]: New session 9 of user core. Jan 24 00:37:18.483394 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:37:19.058433 sshd[5571]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:19.067516 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:37:19.070135 systemd[1]: sshd@8-65.21.184.255:22-20.161.92.111:44054.service: Deactivated successfully. Jan 24 00:37:19.079804 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:37:19.083608 systemd-logind[1487]: Removed session 9. Jan 24 00:37:20.276092 kubelet[2555]: E0124 00:37:20.276036 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:37:21.275454 kubelet[2555]: E0124 00:37:21.275355 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:37:21.276890 kubelet[2555]: E0124 00:37:21.276671 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" 
podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:37:22.275405 kubelet[2555]: E0124 00:37:22.275287 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:37:23.277018 kubelet[2555]: E0124 00:37:23.275503 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:37:24.207613 systemd[1]: Started sshd@9-65.21.184.255:22-20.161.92.111:60474.service - OpenSSH per-connection server daemon (20.161.92.111:60474). Jan 24 00:37:24.970152 sshd[5585]: Accepted publickey for core from 20.161.92.111 port 60474 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:24.971879 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:24.976654 systemd-logind[1487]: New session 10 of user core. Jan 24 00:37:24.981201 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:37:25.609775 sshd[5585]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:25.617153 systemd[1]: sshd@9-65.21.184.255:22-20.161.92.111:60474.service: Deactivated successfully. Jan 24 00:37:25.620731 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:37:25.622669 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:37:25.625347 systemd-logind[1487]: Removed session 10. Jan 24 00:37:25.749676 systemd[1]: Started sshd@10-65.21.184.255:22-20.161.92.111:60486.service - OpenSSH per-connection server daemon (20.161.92.111:60486). Jan 24 00:37:26.524850 sshd[5603]: Accepted publickey for core from 20.161.92.111 port 60486 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:26.527157 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:26.533068 systemd-logind[1487]: New session 11 of user core. Jan 24 00:37:26.539352 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:37:27.139575 sshd[5603]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:27.145783 systemd[1]: sshd@10-65.21.184.255:22-20.161.92.111:60486.service: Deactivated successfully. Jan 24 00:37:27.150866 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:37:27.152353 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:37:27.154410 systemd-logind[1487]: Removed session 11. 
Jan 24 00:37:27.286349 systemd[1]: Started sshd@11-65.21.184.255:22-20.161.92.111:60488.service - OpenSSH per-connection server daemon (20.161.92.111:60488). Jan 24 00:37:28.053378 sshd[5617]: Accepted publickey for core from 20.161.92.111 port 60488 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:28.056318 sshd[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:28.064047 systemd-logind[1487]: New session 12 of user core. Jan 24 00:37:28.071270 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:37:28.639744 sshd[5617]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:28.650026 systemd[1]: sshd@11-65.21.184.255:22-20.161.92.111:60488.service: Deactivated successfully. Jan 24 00:37:28.655562 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:37:28.657044 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:37:28.658874 systemd-logind[1487]: Removed session 12. Jan 24 00:37:31.274971 kubelet[2555]: E0124 00:37:31.273053 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:37:32.276180 containerd[1515]: time="2026-01-24T00:37:32.275702410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:37:32.705597 containerd[1515]: time="2026-01-24T00:37:32.705462922Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:37:32.707219 containerd[1515]: time="2026-01-24T00:37:32.707003095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:37:32.707219 containerd[1515]: time="2026-01-24T00:37:32.707097628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:37:32.707593 kubelet[2555]: E0124 00:37:32.707379 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:37:32.707593 kubelet[2555]: E0124 00:37:32.707532 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:37:32.708439 kubelet[2555]: E0124 00:37:32.708184 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwpww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-njp75_calico-system(641bc171-0396-4a65-b184-ec8db27324ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:32.710249 containerd[1515]: time="2026-01-24T00:37:32.710220585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:37:33.150790 containerd[1515]: time="2026-01-24T00:37:33.150570234Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:37:33.152413 containerd[1515]: time="2026-01-24T00:37:33.152309546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:37:33.152413 containerd[1515]: time="2026-01-24T00:37:33.152366918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:37:33.152602 kubelet[2555]: E0124 00:37:33.152550 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:37:33.152666 kubelet[2555]: E0124 00:37:33.152619 2555 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:37:33.152809 kubelet[2555]: E0124 00:37:33.152759 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwpww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-njp75_calico-system(641bc171-0396-4a65-b184-ec8db27324ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:33.154286 kubelet[2555]: E0124 00:37:33.154244 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:37:33.276492 containerd[1515]: time="2026-01-24T00:37:33.276441945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:37:33.280793 kubelet[2555]: E0124 00:37:33.280751 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:37:33.708626 containerd[1515]: time="2026-01-24T00:37:33.708544235Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:37:33.710195 containerd[1515]: time="2026-01-24T00:37:33.710138651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:37:33.710411 containerd[1515]: time="2026-01-24T00:37:33.710243244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:37:33.710490 kubelet[2555]: E0124 00:37:33.710435 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:37:33.711603 kubelet[2555]: E0124 00:37:33.710489 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:37:33.711603 kubelet[2555]: E0124 00:37:33.710663 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8b3c08fc22324961a4ad528b035b9863,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cpl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-699d95d6f6-9xqqx_calico-system(b07048e0-47ed-414d-b89a-27e90221643c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:33.714358 containerd[1515]: time="2026-01-24T00:37:33.714295804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:37:33.776054 systemd[1]: Started sshd@12-65.21.184.255:22-20.161.92.111:37334.service - OpenSSH per-connection server daemon (20.161.92.111:37334). 
Jan 24 00:37:34.181402 containerd[1515]: time="2026-01-24T00:37:34.181206868Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:37:34.183115 containerd[1515]: time="2026-01-24T00:37:34.183011644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:37:34.183234 containerd[1515]: time="2026-01-24T00:37:34.183131017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:37:34.184200 kubelet[2555]: E0124 00:37:34.183491 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:37:34.184200 kubelet[2555]: E0124 00:37:34.183573 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:37:34.184200 kubelet[2555]: E0124 00:37:34.183723 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cpl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{}
,RestartPolicy:nil,} start failed in pod whisker-699d95d6f6-9xqqx_calico-system(b07048e0-47ed-414d-b89a-27e90221643c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:34.185466 kubelet[2555]: E0124 00:37:34.185392 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:37:34.557130 sshd[5632]: Accepted publickey for core from 20.161.92.111 port 37334 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:34.558933 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:34.563276 systemd-logind[1487]: New session 13 of user core. Jan 24 00:37:34.577029 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:37:35.136085 sshd[5632]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:35.142649 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:37:35.143165 systemd[1]: sshd@12-65.21.184.255:22-20.161.92.111:37334.service: Deactivated successfully. Jan 24 00:37:35.147133 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:37:35.151630 systemd-logind[1487]: Removed session 13. Jan 24 00:37:35.277485 systemd[1]: Started sshd@13-65.21.184.255:22-20.161.92.111:37344.service - OpenSSH per-connection server daemon (20.161.92.111:37344). Jan 24 00:37:35.279907 kubelet[2555]: E0124 00:37:35.279833 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:37:36.045837 sshd[5653]: Accepted publickey for core from 20.161.92.111 port 37344 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:36.046704 sshd[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:36.054087 systemd-logind[1487]: New session 14 of user core. Jan 24 00:37:36.059058 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:37:36.784159 sshd[5653]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:36.793347 systemd-logind[1487]: Session 14 logged out. 
Waiting for processes to exit. Jan 24 00:37:36.794238 systemd[1]: sshd@13-65.21.184.255:22-20.161.92.111:37344.service: Deactivated successfully. Jan 24 00:37:36.799621 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:37:36.805625 systemd-logind[1487]: Removed session 14. Jan 24 00:37:36.933392 systemd[1]: Started sshd@14-65.21.184.255:22-20.161.92.111:37348.service - OpenSSH per-connection server daemon (20.161.92.111:37348). Jan 24 00:37:37.275317 containerd[1515]: time="2026-01-24T00:37:37.275204115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:37:37.706492 sshd[5664]: Accepted publickey for core from 20.161.92.111 port 37348 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:37.707840 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:37.708669 containerd[1515]: time="2026-01-24T00:37:37.707994838Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:37:37.712722 containerd[1515]: time="2026-01-24T00:37:37.710269360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:37:37.712722 containerd[1515]: time="2026-01-24T00:37:37.710408345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:37:37.712883 kubelet[2555]: E0124 00:37:37.710801 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:37:37.712883 kubelet[2555]: E0124 00:37:37.710865 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:37:37.712883 kubelet[2555]: E0124 00:37:37.711083 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vctxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6494d5bd79-znrpb_calico-system(3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:37.718084 kubelet[2555]: E0124 00:37:37.717124 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:37:37.719053 systemd-logind[1487]: New session 15 of user core. 
Jan 24 00:37:37.725001 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:37:39.020455 sshd[5664]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:39.028528 systemd[1]: sshd@14-65.21.184.255:22-20.161.92.111:37348.service: Deactivated successfully. Jan 24 00:37:39.032983 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:37:39.034732 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:37:39.036913 systemd-logind[1487]: Removed session 15. Jan 24 00:37:39.160166 systemd[1]: Started sshd@15-65.21.184.255:22-20.161.92.111:37358.service - OpenSSH per-connection server daemon (20.161.92.111:37358). Jan 24 00:37:39.931828 sshd[5685]: Accepted publickey for core from 20.161.92.111 port 37358 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:39.936878 sshd[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:39.946299 systemd-logind[1487]: New session 16 of user core. Jan 24 00:37:39.954203 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:37:40.753222 sshd[5685]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:40.759343 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:37:40.761863 systemd[1]: sshd@15-65.21.184.255:22-20.161.92.111:37358.service: Deactivated successfully. Jan 24 00:37:40.767689 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:37:40.772591 systemd-logind[1487]: Removed session 16. Jan 24 00:37:40.887991 systemd[1]: Started sshd@16-65.21.184.255:22-20.161.92.111:37364.service - OpenSSH per-connection server daemon (20.161.92.111:37364). Jan 24 00:37:41.665273 sshd[5717]: Accepted publickey for core from 20.161.92.111 port 37364 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:41.669349 sshd[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:41.682105 systemd-logind[1487]: New session 17 of user core. Jan 24 00:37:41.692275 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:37:42.315504 sshd[5717]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:42.321188 systemd[1]: sshd@16-65.21.184.255:22-20.161.92.111:37364.service: Deactivated successfully. Jan 24 00:37:42.326539 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:37:42.332785 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:37:42.336638 systemd-logind[1487]: Removed session 17. 
Jan 24 00:37:43.275258 containerd[1515]: time="2026-01-24T00:37:43.275120832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:37:43.913457 containerd[1515]: time="2026-01-24T00:37:43.913377501Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:37:43.915492 containerd[1515]: time="2026-01-24T00:37:43.915267283Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:37:43.915492 containerd[1515]: time="2026-01-24T00:37:43.915390288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:37:43.917787 kubelet[2555]: E0124 00:37:43.917108 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:37:43.917787 kubelet[2555]: E0124 00:37:43.917183 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:37:43.917787 kubelet[2555]: E0124 00:37:43.917368 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmf64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79c764d8b9-zh62f_calico-apiserver(88376c0e-7993-4786-9815-0474220bc333): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:43.918748 kubelet[2555]: E0124 00:37:43.918680 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:37:44.275681 kubelet[2555]: E0124 00:37:44.275020 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:37:47.276904 containerd[1515]: time="2026-01-24T00:37:47.275919615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:37:47.449140 systemd[1]: Started sshd@17-65.21.184.255:22-20.161.92.111:60216.service - OpenSSH per-connection server daemon (20.161.92.111:60216). 
Jan 24 00:37:47.718092 containerd[1515]: time="2026-01-24T00:37:47.717810961Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:37:47.720898 containerd[1515]: time="2026-01-24T00:37:47.719925782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:37:47.720898 containerd[1515]: time="2026-01-24T00:37:47.720051108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:37:47.721114 kubelet[2555]: E0124 00:37:47.720203 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:37:47.721114 kubelet[2555]: E0124 00:37:47.720268 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:37:47.721114 kubelet[2555]: E0124 00:37:47.720423 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrslm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-v8smt_calico-system(639522bb-4ded-4c6d-8204-2dc920251ed9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:47.722053 kubelet[2555]: E0124 00:37:47.721990 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:37:48.207136 sshd[5739]: Accepted publickey for core from 20.161.92.111 port 60216 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:48.210512 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:48.219552 systemd-logind[1487]: New session 18 of user core. Jan 24 00:37:48.228190 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:37:48.280817 kubelet[2555]: E0124 00:37:48.280709 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:37:48.813392 sshd[5739]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:48.819987 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:37:48.821582 systemd[1]: sshd@17-65.21.184.255:22-20.161.92.111:60216.service: Deactivated successfully. 
Jan 24 00:37:48.829569 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:37:48.834770 systemd-logind[1487]: Removed session 18. Jan 24 00:37:50.274535 containerd[1515]: time="2026-01-24T00:37:50.274506676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:37:50.697590 containerd[1515]: time="2026-01-24T00:37:50.697431252Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:37:50.698760 containerd[1515]: time="2026-01-24T00:37:50.698713066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:37:50.699028 containerd[1515]: time="2026-01-24T00:37:50.698861594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:37:50.699106 kubelet[2555]: E0124 00:37:50.699068 2555 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:37:50.699493 kubelet[2555]: E0124 00:37:50.699119 2555 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:37:50.699493 kubelet[2555]: E0124 00:37:50.699247 2555 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdr4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79c764d8b9-6vp5z_calico-apiserver(f4761a40-d4c5-46a6-ba7c-5af41f9766d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:50.701348 kubelet[2555]: E0124 00:37:50.701307 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:37:51.275310 kubelet[2555]: E0124 00:37:51.275244 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:37:53.949189 systemd[1]: Started sshd@18-65.21.184.255:22-20.161.92.111:55804.service - OpenSSH per-connection server daemon (20.161.92.111:55804). Jan 24 00:37:54.708059 sshd[5767]: Accepted publickey for core from 20.161.92.111 port 55804 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:37:54.710180 sshd[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:54.720632 systemd-logind[1487]: New session 19 of user core. Jan 24 00:37:54.728360 systemd[1]: Started session-19.scope - Session 19 of User core. 
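The "trying next host - response was http.StatusNotFound" lines are containerd's resolver relaying a 404 from the registry's manifest endpoint. A tag's absence can be confirmed directly against the OCI distribution API; a sketch in Go, assuming ghcr.io grants an anonymous pull token for the repository (its usual behaviour for public images):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        // Repository and tag taken from the failing pull above.
        repo, tag := "flatcar/calico/apiserver", "v3.30.4"

        // Anonymous bearer token for public pulls (assumed to be granted).
        resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            panic(err)
        }

        // HEAD the manifest; a 404 here is what containerd reports as "not found".
        req, _ := http.NewRequest(http.MethodHead,
            "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
        mresp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        mresp.Body.Close()
        fmt.Println(mresp.Status)
    }
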
Jan 24 00:37:55.276153 kubelet[2555]: E0124 00:37:55.276041 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:37:55.339408 sshd[5767]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:55.343987 systemd[1]: sshd@18-65.21.184.255:22-20.161.92.111:55804.service: Deactivated successfully. Jan 24 00:37:55.345677 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:37:55.349134 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:37:55.351160 systemd-logind[1487]: Removed session 19. Jan 24 00:37:58.275750 kubelet[2555]: E0124 00:37:58.275347 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:37:59.278562 kubelet[2555]: E0124 00:37:59.278382 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:38:02.276122 kubelet[2555]: E0124 00:38:02.275676 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:38:03.274843 kubelet[2555]: E0124 00:38:03.274723 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:38:03.274843 kubelet[2555]: E0124 00:38:03.274782 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:38:08.275379 kubelet[2555]: E0124 00:38:08.275240 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:38:11.275008 kubelet[2555]: E0124 00:38:11.274902 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:38:12.275320 kubelet[2555]: E0124 00:38:12.275233 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:38:12.757291 kubelet[2555]: I0124 00:38:12.757208 2555 status_manager.go:895] "Failed to get status for pod" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44648->10.0.0.2:2379: read: connection timed out" Jan 24 00:38:12.758813 kubelet[2555]: E0124 00:38:12.756759 2555 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44564->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-79c764d8b9-6vp5z.188d83a584b9b6eb calico-apiserver 1482 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-79c764d8b9-6vp5z,UID:f4761a40-d4c5-46a6-ba7c-5af41f9766d5,APIVersion:v1,ResourceVersion:811,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-56b1d28098,},FirstTimestamp:2026-01-24 00:36:15 +0000 UTC,LastTimestamp:2026-01-24 00:38:02.275609458 +0000 UTC m=+156.201760404,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-56b1d28098,}" Jan 24 00:38:12.991713 systemd[1]: cri-containerd-65c7fa4e072d91e72f0948e4eac2706540d4364bb1f8690248d325808ee98839.scope: Deactivated successfully. Jan 24 00:38:12.992996 systemd[1]: cri-containerd-65c7fa4e072d91e72f0948e4eac2706540d4364bb1f8690248d325808ee98839.scope: Consumed 4.211s CPU time, 16.0M memory peak, 0B memory swap peak. Jan 24 00:38:13.035391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65c7fa4e072d91e72f0948e4eac2706540d4364bb1f8690248d325808ee98839-rootfs.mount: Deactivated successfully. 
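The "read tcp 10.0.0.3:44648->10.0.0.2:2379: read: connection timed out" errors mark a different failure from the image pulls: the apiserver's etcd backend at 10.0.0.2:2379 has stopped answering, so pod status updates, events, and the node lease all begin timing out, and the cri-containerd scope deactivations that follow show control-plane containers dying. An etcd member can be probed directly with the v3 client; a minimal sketch, with the endpoint scheme an assumption and the client TLS material a real control plane requires omitted:

    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        endpoint := "https://10.0.0.2:2379" // the peer the log shows timing out
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{endpoint},
            DialTimeout: 5 * time.Second,
            // TLS config omitted; etcd normally requires client certificates.
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        st, err := cli.Status(ctx, endpoint)
        if err != nil {
            fmt.Println("member unhealthy:", err) // matches the timeouts above
            return
        }
        fmt.Printf("member healthy: raft term %d, db size %d bytes\n",
            st.RaftTerm, st.DbSize)
    }
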
Jan 24 00:38:13.045204 containerd[1515]: time="2026-01-24T00:38:13.044927480Z" level=info msg="shim disconnected" id=65c7fa4e072d91e72f0948e4eac2706540d4364bb1f8690248d325808ee98839 namespace=k8s.io Jan 24 00:38:13.045204 containerd[1515]: time="2026-01-24T00:38:13.045197947Z" level=warning msg="cleaning up after shim disconnected" id=65c7fa4e072d91e72f0948e4eac2706540d4364bb1f8690248d325808ee98839 namespace=k8s.io Jan 24 00:38:13.046124 containerd[1515]: time="2026-01-24T00:38:13.045218628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:38:13.400072 kubelet[2555]: E0124 00:38:13.399790 2555 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44730->10.0.0.2:2379: read: connection timed out" Jan 24 00:38:13.828851 kubelet[2555]: I0124 00:38:13.828801 2555 scope.go:117] "RemoveContainer" containerID="65c7fa4e072d91e72f0948e4eac2706540d4364bb1f8690248d325808ee98839" Jan 24 00:38:13.842918 containerd[1515]: time="2026-01-24T00:38:13.842853460Z" level=info msg="CreateContainer within sandbox \"829de6c52c1b673c120c73680c69b00deaf0771b10f3f4fc88d7a3f73be7a04a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 24 00:38:13.862064 containerd[1515]: time="2026-01-24T00:38:13.860453029Z" level=info msg="CreateContainer within sandbox \"829de6c52c1b673c120c73680c69b00deaf0771b10f3f4fc88d7a3f73be7a04a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"48387be9d2ba729cba4aae9cd81a3edc424c977c5e434373d5ebd0747c9795e0\"" Jan 24 00:38:13.862213 containerd[1515]: time="2026-01-24T00:38:13.862118328Z" level=info msg="StartContainer for \"48387be9d2ba729cba4aae9cd81a3edc424c977c5e434373d5ebd0747c9795e0\"" Jan 24 00:38:13.922173 systemd[1]: Started cri-containerd-48387be9d2ba729cba4aae9cd81a3edc424c977c5e434373d5ebd0747c9795e0.scope - libcontainer container 48387be9d2ba729cba4aae9cd81a3edc424c977c5e434373d5ebd0747c9795e0. Jan 24 00:38:14.001416 containerd[1515]: time="2026-01-24T00:38:14.001354893Z" level=info msg="StartContainer for \"48387be9d2ba729cba4aae9cd81a3edc424c977c5e434373d5ebd0747c9795e0\" returns successfully" Jan 24 00:38:14.094162 systemd[1]: cri-containerd-f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53.scope: Deactivated successfully. Jan 24 00:38:14.095132 systemd[1]: cri-containerd-f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53.scope: Consumed 19.485s CPU time. Jan 24 00:38:14.111923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53-rootfs.mount: Deactivated successfully. 
Jan 24 00:38:14.121896 containerd[1515]: time="2026-01-24T00:38:14.121840737Z" level=info msg="shim disconnected" id=f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53 namespace=k8s.io Jan 24 00:38:14.121896 containerd[1515]: time="2026-01-24T00:38:14.121892941Z" level=warning msg="cleaning up after shim disconnected" id=f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53 namespace=k8s.io Jan 24 00:38:14.122289 containerd[1515]: time="2026-01-24T00:38:14.121902071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:38:14.833109 kubelet[2555]: I0124 00:38:14.833034 2555 scope.go:117] "RemoveContainer" containerID="bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4" Jan 24 00:38:14.833837 kubelet[2555]: I0124 00:38:14.833446 2555 scope.go:117] "RemoveContainer" containerID="f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53" Jan 24 00:38:14.833837 kubelet[2555]: E0124 00:38:14.833728 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-5qqk8_tigera-operator(0aa4e390-1222-49f2-b874-105606b753dc)\"" pod="tigera-operator/tigera-operator-7dcd859c48-5qqk8" podUID="0aa4e390-1222-49f2-b874-105606b753dc" Jan 24 00:38:14.835752 containerd[1515]: time="2026-01-24T00:38:14.835693101Z" level=info msg="RemoveContainer for \"bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4\"" Jan 24 00:38:14.841559 containerd[1515]: time="2026-01-24T00:38:14.841494115Z" level=info msg="RemoveContainer for \"bb3c214b8ff34597a7425aa18a5d8c1d184eb118cfac9e13f4fd89eaa641aff4\" returns successfully" Jan 24 00:38:15.273905 kubelet[2555]: E0124 00:38:15.273845 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:38:15.273905 kubelet[2555]: E0124 00:38:15.273837 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:38:16.274122 kubelet[2555]: E0124 00:38:16.274032 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:38:19.079161 systemd[1]: cri-containerd-1e8cb66e9f721cbc97e95dddf8beecf29e9ce1ed100a1c91cf0525e90d67e063.scope: Deactivated successfully. Jan 24 00:38:19.079752 systemd[1]: cri-containerd-1e8cb66e9f721cbc97e95dddf8beecf29e9ce1ed100a1c91cf0525e90d67e063.scope: Consumed 2.050s CPU time, 15.6M memory peak, 0B memory swap peak. Jan 24 00:38:19.129366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e8cb66e9f721cbc97e95dddf8beecf29e9ce1ed100a1c91cf0525e90d67e063-rootfs.mount: Deactivated successfully. Jan 24 00:38:19.136210 containerd[1515]: time="2026-01-24T00:38:19.136115886Z" level=info msg="shim disconnected" id=1e8cb66e9f721cbc97e95dddf8beecf29e9ce1ed100a1c91cf0525e90d67e063 namespace=k8s.io Jan 24 00:38:19.136210 containerd[1515]: time="2026-01-24T00:38:19.136193542Z" level=warning msg="cleaning up after shim disconnected" id=1e8cb66e9f721cbc97e95dddf8beecf29e9ce1ed100a1c91cf0525e90d67e063 namespace=k8s.io Jan 24 00:38:19.137368 containerd[1515]: time="2026-01-24T00:38:19.136214683Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:38:19.274906 kubelet[2555]: E0124 00:38:19.274820 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:38:19.854379 kubelet[2555]: I0124 00:38:19.854280 2555 scope.go:117] "RemoveContainer" containerID="1e8cb66e9f721cbc97e95dddf8beecf29e9ce1ed100a1c91cf0525e90d67e063" Jan 24 00:38:19.857283 containerd[1515]: time="2026-01-24T00:38:19.857200612Z" level=info msg="CreateContainer within sandbox \"7698717a34af0476189fb1375339bbd34afaf1e269914d3a7ffbe300cd061817\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 24 00:38:19.880850 containerd[1515]: time="2026-01-24T00:38:19.880777802Z" level=info msg="CreateContainer within sandbox \"7698717a34af0476189fb1375339bbd34afaf1e269914d3a7ffbe300cd061817\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"43fa160b6ef74e146e9a1df6fb34d22ce0556cb662a97fecda05404f84c676c5\"" Jan 24 00:38:19.882248 containerd[1515]: time="2026-01-24T00:38:19.882196978Z" level=info msg="StartContainer for \"43fa160b6ef74e146e9a1df6fb34d22ce0556cb662a97fecda05404f84c676c5\"" Jan 24 00:38:19.944166 systemd[1]: Started cri-containerd-43fa160b6ef74e146e9a1df6fb34d22ce0556cb662a97fecda05404f84c676c5.scope - libcontainer container 43fa160b6ef74e146e9a1df6fb34d22ce0556cb662a97fecda05404f84c676c5. 
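kubelet is recovering its static control-plane pods here: each dead container's shim disconnects, the old container ID is removed, and CreateContainer is reissued in the existing sandbox with the attempt counter bumped (Attempt:1 for kube-controller-manager and kube-scheduler, Attempt:2 for tigera-operator below). The same attempt counters are visible through the CRI endpoint kubelet itself talks to; a sketch, assuming containerd's default socket path:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's default CRI socket; adjust if the node differs.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // Attempt increments each time kubelet recreates the container.
            fmt.Printf("%-30s attempt=%d state=%v\n",
                c.Metadata.Name, c.Metadata.Attempt, c.State)
        }
    }
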
Jan 24 00:38:20.030277 containerd[1515]: time="2026-01-24T00:38:20.030198485Z" level=info msg="StartContainer for \"43fa160b6ef74e146e9a1df6fb34d22ce0556cb662a97fecda05404f84c676c5\" returns successfully" Jan 24 00:38:23.273816 kubelet[2555]: E0124 00:38:23.273747 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-zh62f" podUID="88376c0e-7993-4786-9815-0474220bc333" Jan 24 00:38:23.401003 kubelet[2555]: E0124 00:38:23.400775 2555 controller.go:195] "Failed to update lease" err="Put \"https://65.21.184.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-56b1d28098?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:38:25.273801 kubelet[2555]: E0124 00:38:25.273659 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-699d95d6f6-9xqqx" podUID="b07048e0-47ed-414d-b89a-27e90221643c" Jan 24 00:38:26.273916 kubelet[2555]: E0124 00:38:26.273760 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c764d8b9-6vp5z" podUID="f4761a40-d4c5-46a6-ba7c-5af41f9766d5" Jan 24 00:38:27.274017 kubelet[2555]: E0124 00:38:27.273781 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v8smt" podUID="639522bb-4ded-4c6d-8204-2dc920251ed9" Jan 24 00:38:28.273741 kubelet[2555]: I0124 00:38:28.272854 2555 scope.go:117] 
"RemoveContainer" containerID="f5b080269e94d45485feb08c62e62d95d1c466d386c907c6bb8e3a13a64beb53" Jan 24 00:38:28.274488 kubelet[2555]: E0124 00:38:28.274412 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6494d5bd79-znrpb" podUID="3380c1f2-8b6a-4c4c-8029-b87f9aa9e7d9" Jan 24 00:38:28.282251 containerd[1515]: time="2026-01-24T00:38:28.281380546Z" level=info msg="CreateContainer within sandbox \"15b3afb52aceccd7d898946dfc9cdae8df9e661d1e84abfb798dc2c780c6fbf5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Jan 24 00:38:28.302329 containerd[1515]: time="2026-01-24T00:38:28.301873225Z" level=info msg="CreateContainer within sandbox \"15b3afb52aceccd7d898946dfc9cdae8df9e661d1e84abfb798dc2c780c6fbf5\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"770b0ed319c83570e5d371c90615a8b166bd26b65a3ae3f3255c3078ae29ee81\"" Jan 24 00:38:28.305040 containerd[1515]: time="2026-01-24T00:38:28.304166558Z" level=info msg="StartContainer for \"770b0ed319c83570e5d371c90615a8b166bd26b65a3ae3f3255c3078ae29ee81\"" Jan 24 00:38:28.306863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914487007.mount: Deactivated successfully. Jan 24 00:38:28.368175 systemd[1]: Started cri-containerd-770b0ed319c83570e5d371c90615a8b166bd26b65a3ae3f3255c3078ae29ee81.scope - libcontainer container 770b0ed319c83570e5d371c90615a8b166bd26b65a3ae3f3255c3078ae29ee81. Jan 24 00:38:28.397611 containerd[1515]: time="2026-01-24T00:38:28.397559256Z" level=info msg="StartContainer for \"770b0ed319c83570e5d371c90615a8b166bd26b65a3ae3f3255c3078ae29ee81\" returns successfully" Jan 24 00:38:30.275596 kubelet[2555]: E0124 00:38:30.275400 2555 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njp75" podUID="641bc171-0396-4a65-b184-ec8db27324ea" Jan 24 00:38:33.402426 kubelet[2555]: E0124 00:38:33.402303 2555 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4081-3-6-n-56b1d28098)"