Jan 23 18:56:25.108302 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026 Jan 23 18:56:25.108343 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 18:56:25.108364 kernel: BIOS-provided physical RAM map: Jan 23 18:56:25.108377 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 23 18:56:25.108390 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 23 18:56:25.108403 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 23 18:56:25.108419 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 23 18:56:25.108434 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 23 18:56:25.108448 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd2e4fff] usable Jan 23 18:56:25.108465 kernel: BIOS-e820: [mem 0x00000000bd2e5000-0x00000000bd2eefff] ACPI data Jan 23 18:56:25.108479 kernel: BIOS-e820: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] usable Jan 23 18:56:25.108492 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jan 23 18:56:25.108506 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 23 18:56:25.108520 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 23 18:56:25.108537 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 23 18:56:25.108556 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 23 18:56:25.108570 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 23 18:56:25.108585 kernel: NX (Execute Disable) protection: active Jan 23 18:56:25.108600 kernel: APIC: Static calls initialized Jan 23 18:56:25.108616 kernel: efi: EFI v2.7 by EDK II Jan 23 18:56:25.108631 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 RNG=0xbfb73018 TPMEventLog=0xbd2e5018 Jan 23 18:56:25.108647 kernel: random: crng init done Jan 23 18:56:25.108669 kernel: secureboot: Secure boot disabled Jan 23 18:56:25.108684 kernel: SMBIOS 2.4 present. 
Jan 23 18:56:25.108700 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025 Jan 23 18:56:25.108719 kernel: DMI: Memory slots populated: 1/1 Jan 23 18:56:25.108733 kernel: Hypervisor detected: KVM Jan 23 18:56:25.108748 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 23 18:56:25.108763 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 18:56:25.108778 kernel: kvm-clock: using sched offset of 15732050731 cycles Jan 23 18:56:25.108794 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 18:56:25.108827 kernel: tsc: Detected 2299.998 MHz processor Jan 23 18:56:25.108843 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 18:56:25.108859 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 18:56:25.108874 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 23 18:56:25.108894 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 23 18:56:25.108909 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 18:56:25.108924 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 23 18:56:25.108940 kernel: Using GB pages for direct mapping Jan 23 18:56:25.108955 kernel: ACPI: Early table checksum verification disabled Jan 23 18:56:25.108977 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 23 18:56:25.108994 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 23 18:56:25.109012 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 23 18:56:25.109028 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 23 18:56:25.109045 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 23 18:56:25.109061 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Jan 23 18:56:25.109078 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 23 18:56:25.109094 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 23 18:56:25.109110 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 23 18:56:25.109140 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 23 18:56:25.109154 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 23 18:56:25.109170 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 23 18:56:25.109185 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 23 18:56:25.109199 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 23 18:56:25.109216 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 23 18:56:25.109232 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 23 18:56:25.109248 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 23 18:56:25.109265 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 23 18:56:25.109285 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 23 18:56:25.109302 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 23 18:56:25.109318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 23 18:56:25.109334 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 23 18:56:25.109350 kernel: 
ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 23 18:56:25.109366 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Jan 23 18:56:25.109382 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Jan 23 18:56:25.109398 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff] Jan 23 18:56:25.109416 kernel: Zone ranges: Jan 23 18:56:25.109437 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 18:56:25.109454 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 23 18:56:25.109470 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 23 18:56:25.109486 kernel: Device empty Jan 23 18:56:25.109503 kernel: Movable zone start for each node Jan 23 18:56:25.109518 kernel: Early memory node ranges Jan 23 18:56:25.109535 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 23 18:56:25.109551 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 23 18:56:25.109567 kernel: node 0: [mem 0x0000000000100000-0x00000000bd2e4fff] Jan 23 18:56:25.109588 kernel: node 0: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] Jan 23 18:56:25.109604 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 23 18:56:25.109621 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 23 18:56:25.109636 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 23 18:56:25.109661 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:56:25.109678 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 23 18:56:25.109694 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 23 18:56:25.109711 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Jan 23 18:56:25.109728 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 23 18:56:25.109748 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 23 18:56:25.109765 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 23 18:56:25.109782 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 18:56:25.109812 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 18:56:25.109829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 18:56:25.109855 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 18:56:25.109873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 18:56:25.109889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 18:56:25.109905 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 18:56:25.109927 kernel: CPU topo: Max. logical packages: 1 Jan 23 18:56:25.109943 kernel: CPU topo: Max. logical dies: 1 Jan 23 18:56:25.109960 kernel: CPU topo: Max. dies per package: 1 Jan 23 18:56:25.109976 kernel: CPU topo: Max. threads per core: 2 Jan 23 18:56:25.109993 kernel: CPU topo: Num. cores per package: 1 Jan 23 18:56:25.110009 kernel: CPU topo: Num. 
threads per package: 2 Jan 23 18:56:25.110025 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 23 18:56:25.110042 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 23 18:56:25.110058 kernel: Booting paravirtualized kernel on KVM Jan 23 18:56:25.110075 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 18:56:25.110096 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 23 18:56:25.110113 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 23 18:56:25.110129 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 23 18:56:25.110145 kernel: pcpu-alloc: [0] 0 1 Jan 23 18:56:25.110161 kernel: kvm-guest: PV spinlocks enabled Jan 23 18:56:25.110178 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 18:56:25.110197 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 18:56:25.110214 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 23 18:56:25.110233 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 18:56:25.110249 kernel: Fallback order for Node 0: 0 Jan 23 18:56:25.110266 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136 Jan 23 18:56:25.110282 kernel: Policy zone: Normal Jan 23 18:56:25.110299 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 18:56:25.110315 kernel: software IO TLB: area num 2. Jan 23 18:56:25.110346 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 18:56:25.110367 kernel: Kernel/User page tables isolation: enabled Jan 23 18:56:25.110385 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 18:56:25.110402 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 18:56:25.110419 kernel: Dynamic Preempt: voluntary Jan 23 18:56:25.110436 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 18:56:25.110460 kernel: rcu: RCU event tracing is enabled. Jan 23 18:56:25.110479 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 18:56:25.110499 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 18:56:25.110517 kernel: Rude variant of Tasks RCU enabled. Jan 23 18:56:25.110536 kernel: Tracing variant of Tasks RCU enabled. Jan 23 18:56:25.110560 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 18:56:25.110579 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 18:56:25.110597 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:56:25.110615 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:56:25.110634 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:56:25.110661 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 23 18:56:25.110680 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 23 18:56:25.110697 kernel: Console: colour dummy device 80x25 Jan 23 18:56:25.110715 kernel: printk: legacy console [ttyS0] enabled Jan 23 18:56:25.110740 kernel: ACPI: Core revision 20240827 Jan 23 18:56:25.110759 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 18:56:25.110780 kernel: x2apic enabled Jan 23 18:56:25.110815 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 18:56:25.110856 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 23 18:56:25.110875 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 23 18:56:25.110892 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jan 23 18:56:25.110910 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 23 18:56:25.110928 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 23 18:56:25.110950 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 18:56:25.110968 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jan 23 18:56:25.110984 kernel: Spectre V2 : Mitigation: IBRS Jan 23 18:56:25.111003 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 18:56:25.111020 kernel: RETBleed: Mitigation: IBRS Jan 23 18:56:25.111038 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 23 18:56:25.111056 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 23 18:56:25.111074 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 23 18:56:25.111097 kernel: MDS: Mitigation: Clear CPU buffers Jan 23 18:56:25.111116 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 23 18:56:25.111135 kernel: active return thunk: its_return_thunk Jan 23 18:56:25.111154 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 23 18:56:25.111172 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 18:56:25.111191 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 18:56:25.111210 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 18:56:25.111228 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 18:56:25.111247 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 23 18:56:25.111269 kernel: Freeing SMP alternatives memory: 32K Jan 23 18:56:25.111287 kernel: pid_max: default: 32768 minimum: 301 Jan 23 18:56:25.111306 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 18:56:25.111324 kernel: landlock: Up and running. Jan 23 18:56:25.111343 kernel: SELinux: Initializing. Jan 23 18:56:25.111361 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 23 18:56:25.111381 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 23 18:56:25.111399 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 23 18:56:25.111418 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 23 18:56:25.111440 kernel: signal: max sigframe size: 1776 Jan 23 18:56:25.111459 kernel: rcu: Hierarchical SRCU implementation. Jan 23 18:56:25.111479 kernel: rcu: Max phase no-delay instances is 400. 
Jan 23 18:56:25.111498 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 18:56:25.111517 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 23 18:56:25.111535 kernel: smp: Bringing up secondary CPUs ... Jan 23 18:56:25.111554 kernel: smpboot: x86: Booting SMP configuration: Jan 23 18:56:25.111573 kernel: .... node #0, CPUs: #1 Jan 23 18:56:25.111592 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 23 18:56:25.111615 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 23 18:56:25.111634 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 18:56:25.111660 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 23 18:56:25.111679 kernel: Memory: 7555812K/7860544K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 298900K reserved, 0K cma-reserved) Jan 23 18:56:25.111698 kernel: devtmpfs: initialized Jan 23 18:56:25.111717 kernel: x86/mm: Memory block size: 128MB Jan 23 18:56:25.111735 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 23 18:56:25.111754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 18:56:25.111776 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 18:56:25.111795 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 18:56:25.111837 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 18:56:25.111853 kernel: audit: initializing netlink subsys (disabled) Jan 23 18:56:25.111869 kernel: audit: type=2000 audit(1769194580.192:1): state=initialized audit_enabled=0 res=1 Jan 23 18:56:25.111884 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 18:56:25.111902 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 18:56:25.111921 kernel: cpuidle: using governor menu Jan 23 18:56:25.111940 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 18:56:25.111964 kernel: dca service started, version 1.12.1 Jan 23 18:56:25.111983 kernel: PCI: Using configuration type 1 for base access Jan 23 18:56:25.112003 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 23 18:56:25.112023 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 18:56:25.112042 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 18:56:25.112061 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 18:56:25.112080 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 18:56:25.112098 kernel: ACPI: Added _OSI(Module Device) Jan 23 18:56:25.112117 kernel: ACPI: Added _OSI(Processor Device) Jan 23 18:56:25.112139 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 18:56:25.112157 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 23 18:56:25.112175 kernel: ACPI: Interpreter enabled Jan 23 18:56:25.112193 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 18:56:25.112212 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 18:56:25.112231 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 18:56:25.112250 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 23 18:56:25.112268 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 23 18:56:25.112287 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 18:56:25.112543 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 23 18:56:25.112748 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 23 18:56:25.112963 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 23 18:56:25.112988 kernel: PCI host bridge to bus 0000:00 Jan 23 18:56:25.113167 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 18:56:25.113336 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 18:56:25.113511 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 18:56:25.113684 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 23 18:56:25.113869 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 18:56:25.114081 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jan 23 18:56:25.114277 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jan 23 18:56:25.114477 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jan 23 18:56:25.114676 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 23 18:56:25.114901 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Jan 23 18:56:25.115082 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jan 23 18:56:25.115260 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Jan 23 18:56:25.115449 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 23 18:56:25.115628 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Jan 23 18:56:25.115830 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Jan 23 18:56:25.116062 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 23 18:56:25.116249 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Jan 23 18:56:25.116434 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Jan 23 18:56:25.116458 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 18:56:25.116477 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 18:56:25.116496 kernel: ACPI: PCI: 
Interrupt link LNKC configured for IRQ 11 Jan 23 18:56:25.116515 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 18:56:25.116533 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 23 18:56:25.116559 kernel: iommu: Default domain type: Translated Jan 23 18:56:25.116577 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 18:56:25.116596 kernel: efivars: Registered efivars operations Jan 23 18:56:25.116614 kernel: PCI: Using ACPI for IRQ routing Jan 23 18:56:25.116633 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 18:56:25.116659 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 23 18:56:25.116678 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 23 18:56:25.116696 kernel: e820: reserve RAM buffer [mem 0xbd2e5000-0xbfffffff] Jan 23 18:56:25.116715 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 23 18:56:25.116736 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 23 18:56:25.116755 kernel: vgaarb: loaded Jan 23 18:56:25.116773 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 18:56:25.116792 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 18:56:25.116835 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 18:56:25.116853 kernel: pnp: PnP ACPI init Jan 23 18:56:25.116872 kernel: pnp: PnP ACPI: found 7 devices Jan 23 18:56:25.116891 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 18:56:25.116909 kernel: NET: Registered PF_INET protocol family Jan 23 18:56:25.116933 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 23 18:56:25.116952 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 23 18:56:25.116971 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 18:56:25.116989 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 18:56:25.117006 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 23 18:56:25.117025 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 23 18:56:25.117043 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 23 18:56:25.117062 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 23 18:56:25.117080 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 18:56:25.117103 kernel: NET: Registered PF_XDP protocol family Jan 23 18:56:25.117287 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 18:56:25.117457 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 18:56:25.117629 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 18:56:25.117831 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 23 18:56:25.118030 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 23 18:56:25.118054 kernel: PCI: CLS 0 bytes, default 64 Jan 23 18:56:25.118079 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 18:56:25.118097 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 23 18:56:25.118114 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 18:56:25.118132 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 23 18:56:25.118150 kernel: clocksource: Switched to clocksource tsc Jan 23 18:56:25.118170 
kernel: Initialise system trusted keyrings Jan 23 18:56:25.118188 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 23 18:56:25.118207 kernel: Key type asymmetric registered Jan 23 18:56:25.118226 kernel: Asymmetric key parser 'x509' registered Jan 23 18:56:25.118249 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 18:56:25.118268 kernel: io scheduler mq-deadline registered Jan 23 18:56:25.118287 kernel: io scheduler kyber registered Jan 23 18:56:25.118307 kernel: io scheduler bfq registered Jan 23 18:56:25.118325 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 18:56:25.118344 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 23 18:56:25.118563 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 23 18:56:25.118587 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 23 18:56:25.118781 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 23 18:56:25.118831 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 23 18:56:25.119034 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 23 18:56:25.119058 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 18:56:25.119077 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 18:56:25.119095 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 23 18:56:25.119113 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 23 18:56:25.119131 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 23 18:56:25.119335 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 23 18:56:25.119365 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 18:56:25.119384 kernel: i8042: Warning: Keylock active Jan 23 18:56:25.119403 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 18:56:25.119421 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 18:56:25.119615 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 23 18:56:25.119854 kernel: rtc_cmos 00:00: registered as rtc0 Jan 23 18:56:25.120036 kernel: rtc_cmos 00:00: setting system clock to 2026-01-23T18:56:24 UTC (1769194584) Jan 23 18:56:25.120217 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 23 18:56:25.120245 kernel: intel_pstate: CPU model not supported Jan 23 18:56:25.120265 kernel: pstore: Using crash dump compression: deflate Jan 23 18:56:25.120284 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 18:56:25.120301 kernel: NET: Registered PF_INET6 protocol family Jan 23 18:56:25.120316 kernel: Segment Routing with IPv6 Jan 23 18:56:25.120334 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 18:56:25.120353 kernel: NET: Registered PF_PACKET protocol family Jan 23 18:56:25.120371 kernel: Key type dns_resolver registered Jan 23 18:56:25.120390 kernel: IPI shorthand broadcast: enabled Jan 23 18:56:25.120413 kernel: sched_clock: Marking stable (3913004525, 961785935)->(5197035812, -322245352) Jan 23 18:56:25.120433 kernel: registered taskstats version 1 Jan 23 18:56:25.120452 kernel: Loading compiled-in X.509 certificates Jan 23 18:56:25.120470 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6' Jan 23 18:56:25.120488 kernel: Demotion targets for Node 0: null Jan 23 18:56:25.120505 kernel: Key type .fscrypt registered Jan 23 18:56:25.120524 kernel: Key type fscrypt-provisioning registered Jan 23 
18:56:25.120542 kernel: ima: Allocated hash algorithm: sha1 Jan 23 18:56:25.120561 kernel: ima: No architecture policies found Jan 23 18:56:25.120583 kernel: clk: Disabling unused clocks Jan 23 18:56:25.120602 kernel: Warning: unable to open an initial console. Jan 23 18:56:25.120621 kernel: Freeing unused kernel image (initmem) memory: 46200K Jan 23 18:56:25.120641 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 18:56:25.120669 kernel: Write protecting the kernel read-only data: 40960k Jan 23 18:56:25.120689 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 18:56:25.120708 kernel: Run /init as init process Jan 23 18:56:25.120726 kernel: with arguments: Jan 23 18:56:25.120745 kernel: /init Jan 23 18:56:25.120767 kernel: with environment: Jan 23 18:56:25.120785 kernel: HOME=/ Jan 23 18:56:25.120831 kernel: TERM=linux Jan 23 18:56:25.120853 systemd[1]: Successfully made /usr/ read-only. Jan 23 18:56:25.120877 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:56:25.120898 systemd[1]: Detected virtualization google. Jan 23 18:56:25.120915 systemd[1]: Detected architecture x86-64. Jan 23 18:56:25.120939 systemd[1]: Running in initrd. Jan 23 18:56:25.120958 systemd[1]: No hostname configured, using default hostname. Jan 23 18:56:25.120978 systemd[1]: Hostname set to . Jan 23 18:56:25.120999 systemd[1]: Initializing machine ID from random generator. Jan 23 18:56:25.121018 systemd[1]: Queued start job for default target initrd.target. Jan 23 18:56:25.121038 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:56:25.121077 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:56:25.121102 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 18:56:25.121123 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:56:25.121144 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 18:56:25.121167 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 18:56:25.121189 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 18:56:25.121210 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 18:56:25.121235 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:56:25.121255 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:56:25.121276 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:56:25.121296 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:56:25.121317 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:56:25.121337 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:56:25.121358 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 23 18:56:25.121379 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:56:25.121403 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 18:56:25.121424 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 18:56:25.121445 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:56:25.121465 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:56:25.121487 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:56:25.121507 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:56:25.121528 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 18:56:25.121549 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:56:25.121570 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 18:56:25.121596 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 18:56:25.121617 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 18:56:25.121638 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:56:25.121669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:56:25.121690 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:56:25.121757 systemd-journald[192]: Collecting audit messages is disabled. Jan 23 18:56:25.121821 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 18:56:25.121840 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:56:25.121864 systemd-journald[192]: Journal started Jan 23 18:56:25.121901 systemd-journald[192]: Runtime Journal (/run/log/journal/a2fb05523a6d4473be44797396138e81) is 8M, max 148.6M, 140.6M free. Jan 23 18:56:25.126860 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 18:56:25.129960 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:56:25.133855 systemd-modules-load[194]: Inserted module 'overlay' Jan 23 18:56:25.138270 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:56:25.145282 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:56:25.179480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:25.188359 systemd-tmpfiles[203]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 18:56:25.194960 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 18:56:25.192643 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 18:56:25.202105 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:56:25.207076 kernel: Bridge firewalling registered Jan 23 18:56:25.204270 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 23 18:56:25.210486 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:56:25.225970 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 23 18:56:25.229906 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:56:25.232700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:56:25.257794 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:56:25.265238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:56:25.265835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:56:25.273012 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 18:56:25.290000 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:56:25.309720 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 18:56:25.361242 systemd-resolved[231]: Positive Trust Anchors: Jan 23 18:56:25.361597 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:56:25.361670 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:56:25.366258 systemd-resolved[231]: Defaulting to hostname 'linux'. Jan 23 18:56:25.368051 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:56:25.379066 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:56:25.441853 kernel: SCSI subsystem initialized Jan 23 18:56:25.455829 kernel: Loading iSCSI transport class v2.0-870. Jan 23 18:56:25.466843 kernel: iscsi: registered transport (tcp) Jan 23 18:56:25.492169 kernel: iscsi: registered transport (qla4xxx) Jan 23 18:56:25.492266 kernel: QLogic iSCSI HBA Driver Jan 23 18:56:25.515914 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:56:25.535325 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:56:25.536771 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:56:25.605108 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 18:56:25.608507 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 18:56:25.669847 kernel: raid6: avx2x4 gen() 17869 MB/s Jan 23 18:56:25.686843 kernel: raid6: avx2x2 gen() 17960 MB/s Jan 23 18:56:25.704653 kernel: raid6: avx2x1 gen() 13586 MB/s Jan 23 18:56:25.704749 kernel: raid6: using algorithm avx2x2 gen() 17960 MB/s Jan 23 18:56:25.722328 kernel: raid6: .... 
xor() 18439 MB/s, rmw enabled Jan 23 18:56:25.722388 kernel: raid6: using avx2x2 recovery algorithm Jan 23 18:56:25.744841 kernel: xor: automatically using best checksumming function avx Jan 23 18:56:25.929857 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 18:56:25.939597 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:56:25.942994 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:56:25.979043 systemd-udevd[440]: Using default interface naming scheme 'v255'. Jan 23 18:56:25.988926 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:56:25.994725 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 18:56:26.027786 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Jan 23 18:56:26.061506 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:56:26.063588 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:56:26.156527 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:56:26.164313 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 18:56:26.258857 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Jan 23 18:56:26.270844 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:56:26.286750 kernel: scsi host0: Virtio SCSI HBA Jan 23 18:56:26.286869 kernel: blk-mq: reduced tag depth to 10240 Jan 23 18:56:26.298860 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 23 18:56:26.314186 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 23 18:56:26.376922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:56:26.383946 kernel: AES CTR mode by8 optimization enabled Jan 23 18:56:26.377147 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:26.381283 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:56:26.394154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:56:26.406825 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Jan 23 18:56:26.407169 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 23 18:56:26.407432 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 23 18:56:26.408541 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:56:26.413005 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 23 18:56:26.417845 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 23 18:56:26.431076 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 18:56:26.431145 kernel: GPT:17805311 != 33554431 Jan 23 18:56:26.431170 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 18:56:26.432093 kernel: GPT:17805311 != 33554431 Jan 23 18:56:26.433321 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 18:56:26.433349 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 18:56:26.435119 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 23 18:56:26.462188 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:26.521441 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. 
Jan 23 18:56:26.548381 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:56:26.565696 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 23 18:56:26.581084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 23 18:56:26.591855 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 23 18:56:26.592113 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 23 18:56:26.600021 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:56:26.604926 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:56:26.608938 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:56:26.615364 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:56:26.622000 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:56:26.639077 disk-uuid[593]: Primary Header is updated. Jan 23 18:56:26.639077 disk-uuid[593]: Secondary Entries is updated. Jan 23 18:56:26.639077 disk-uuid[593]: Secondary Header is updated. Jan 23 18:56:26.654901 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 18:56:26.660125 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:56:27.686855 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 18:56:27.687650 disk-uuid[594]: The operation has completed successfully. Jan 23 18:56:27.770042 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 18:56:27.770204 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 18:56:27.814858 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 18:56:27.848255 sh[615]: Success Jan 23 18:56:27.871853 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 18:56:27.871944 kernel: device-mapper: uevent: version 1.0.3 Jan 23 18:56:27.871990 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 18:56:27.885826 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jan 23 18:56:27.972611 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:56:27.978086 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 18:56:27.996107 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 18:56:28.014838 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (627) Jan 23 18:56:28.017483 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841 Jan 23 18:56:28.017539 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:56:28.039681 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 18:56:28.039777 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 18:56:28.039818 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 18:56:28.044312 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 18:56:28.045229 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Jan 23 18:56:28.048200 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 18:56:28.051182 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 18:56:28.059374 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 18:56:28.101832 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (661) Jan 23 18:56:28.104674 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:56:28.104740 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:56:28.112554 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 18:56:28.112634 kernel: BTRFS info (device sda6): turning on async discard Jan 23 18:56:28.112661 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 18:56:28.119919 kernel: BTRFS info (device sda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:56:28.122099 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 18:56:28.129479 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 18:56:28.228718 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:56:28.242111 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:56:28.376453 systemd-networkd[796]: lo: Link UP Jan 23 18:56:28.376900 systemd-networkd[796]: lo: Gained carrier Jan 23 18:56:28.380613 systemd-networkd[796]: Enumeration completed Jan 23 18:56:28.381333 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:56:28.382488 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:56:28.390958 ignition[723]: Ignition 2.22.0 Jan 23 18:56:28.382496 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:56:28.390966 ignition[723]: Stage: fetch-offline Jan 23 18:56:28.384431 systemd-networkd[796]: eth0: Link UP Jan 23 18:56:28.391004 ignition[723]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:28.384636 systemd-networkd[796]: eth0: Gained carrier Jan 23 18:56:28.391015 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 23 18:56:28.384653 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:56:28.391136 ignition[723]: parsed url from cmdline: "" Jan 23 18:56:28.388061 systemd[1]: Reached target network.target - Network. Jan 23 18:56:28.391141 ignition[723]: no config URL provided Jan 23 18:56:28.393586 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:56:28.391147 ignition[723]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:56:28.393880 systemd-networkd[796]: eth0: DHCPv4 address 10.128.0.7/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 23 18:56:28.391156 ignition[723]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:56:28.403961 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 18:56:28.391164 ignition[723]: failed to fetch config: resource requires networking Jan 23 18:56:28.391432 ignition[723]: Ignition finished successfully Jan 23 18:56:28.446197 ignition[806]: Ignition 2.22.0 Jan 23 18:56:28.457719 unknown[806]: fetched base config from "system" Jan 23 18:56:28.446205 ignition[806]: Stage: fetch Jan 23 18:56:28.457731 unknown[806]: fetched base config from "system" Jan 23 18:56:28.446362 ignition[806]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:28.457742 unknown[806]: fetched user config from "gcp" Jan 23 18:56:28.446379 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 23 18:56:28.461133 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 18:56:28.446475 ignition[806]: parsed url from cmdline: "" Jan 23 18:56:28.465837 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 18:56:28.446480 ignition[806]: no config URL provided Jan 23 18:56:28.446487 ignition[806]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:56:28.446496 ignition[806]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:56:28.446532 ignition[806]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 23 18:56:28.452242 ignition[806]: GET result: OK Jan 23 18:56:28.452977 ignition[806]: parsing config with SHA512: 6b4c9899a557b4e3fd2122d56f7a2b69bfb196158048d792ab2d6966815f995ebe1f3b909df9d1204c5f9c428d311386d010676b8414a3ce5ebe9e159cefdb60 Jan 23 18:56:28.458376 ignition[806]: fetch: fetch complete Jan 23 18:56:28.458387 ignition[806]: fetch: fetch passed Jan 23 18:56:28.458489 ignition[806]: Ignition finished successfully Jan 23 18:56:28.515780 ignition[812]: Ignition 2.22.0 Jan 23 18:56:28.515788 ignition[812]: Stage: kargs Jan 23 18:56:28.519383 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:56:28.515997 ignition[812]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:28.524474 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 18:56:28.516009 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 23 18:56:28.516969 ignition[812]: kargs: kargs passed Jan 23 18:56:28.517047 ignition[812]: Ignition finished successfully Jan 23 18:56:28.571414 ignition[818]: Ignition 2.22.0 Jan 23 18:56:28.571432 ignition[818]: Stage: disks Jan 23 18:56:28.571658 ignition[818]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:28.575298 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:56:28.571674 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 23 18:56:28.576519 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:56:28.572937 ignition[818]: disks: disks passed Jan 23 18:56:28.582958 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:56:28.573009 ignition[818]: Ignition finished successfully Jan 23 18:56:28.587122 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:56:28.590152 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:56:28.595204 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:56:28.602007 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 23 18:56:28.640723 systemd-fsck[827]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 18:56:28.653495 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:56:28.660340 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:56:28.833871 kernel: EXT4-fs (sda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none. Jan 23 18:56:28.834530 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:56:28.838047 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:56:28.842183 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:56:28.856905 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:56:28.861552 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 18:56:28.861643 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:56:28.861685 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:56:28.878119 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (835) Jan 23 18:56:28.878174 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:56:28.879048 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:56:28.887548 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 18:56:28.887623 kernel: BTRFS info (device sda6): turning on async discard Jan 23 18:56:28.887646 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 18:56:28.889905 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:56:28.890315 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:56:28.899658 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 18:56:29.025018 initrd-setup-root[859]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 18:56:29.034934 initrd-setup-root[866]: cut: /sysroot/etc/group: No such file or directory Jan 23 18:56:29.042301 initrd-setup-root[873]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 18:56:29.048677 initrd-setup-root[880]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 18:56:29.214307 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:56:29.217648 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:56:29.224386 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:56:29.242344 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:56:29.245210 kernel: BTRFS info (device sda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:56:29.278304 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 18:56:29.289961 ignition[947]: INFO : Ignition 2.22.0 Jan 23 18:56:29.289961 ignition[947]: INFO : Stage: mount Jan 23 18:56:29.294978 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:29.294978 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 23 18:56:29.294978 ignition[947]: INFO : mount: mount passed Jan 23 18:56:29.294978 ignition[947]: INFO : Ignition finished successfully Jan 23 18:56:29.294905 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 23 18:56:29.297392 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 18:56:29.331569 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 18:56:29.362854 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (959)
Jan 23 18:56:29.365535 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:56:29.365605 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:56:29.373386 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 18:56:29.373470 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 18:56:29.373495 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 18:56:29.376954 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 18:56:29.420639 ignition[976]: INFO : Ignition 2.22.0
Jan 23 18:56:29.420639 ignition[976]: INFO : Stage: files
Jan 23 18:56:29.425378 ignition[976]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 18:56:29.425378 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 23 18:56:29.425378 ignition[976]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 18:56:29.425378 ignition[976]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 18:56:29.425378 ignition[976]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 18:56:29.441968 ignition[976]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 18:56:29.441968 ignition[976]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 18:56:29.441968 ignition[976]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 18:56:29.441968 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 18:56:29.441968 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 23 18:56:29.430522 unknown[976]: wrote ssh authorized keys file for user: core
Jan 23 18:56:29.551317 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 18:56:30.254988 systemd-networkd[796]: eth0: Gained IPv6LL
Jan 23 18:56:30.489899 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 18:56:30.495033 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 18:56:30.495033 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 18:56:30.495033 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 18:56:30.495033 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 18:56:30.495033 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 18:56:30.495033 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 18:56:30.495033 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 18:56:30.495033 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 18:56:30.495033 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 18:56:30.495033 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 18:56:30.495033 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 18:56:30.539966 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 18:56:30.539966 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 18:56:30.539966 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 23 18:56:30.909204 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 18:56:31.640188 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 18:56:31.640188 ignition[976]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 18:56:31.650088 ignition[976]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 18:56:31.650088 ignition[976]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 18:56:31.650088 ignition[976]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 18:56:31.650088 ignition[976]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 18:56:31.650088 ignition[976]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 18:56:31.650088 ignition[976]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 18:56:31.650088 ignition[976]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 18:56:31.650088 ignition[976]: INFO : files: files passed
Jan 23 18:56:31.650088 ignition[976]: INFO : Ignition finished successfully
Jan 23 18:56:31.650242 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 18:56:31.658601 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 18:56:31.665364 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 18:56:31.708055 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 18:56:31.708055 initrd-setup-root-after-ignition[1006]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 18:56:31.690397 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 18:56:31.720974 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 18:56:31.690523 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 18:56:31.710038 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 18:56:31.711452 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 18:56:31.718247 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 18:56:31.795925 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 18:56:31.796259 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 18:56:31.801794 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 18:56:31.805163 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 18:56:31.809352 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 18:56:31.811750 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 18:56:31.846106 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 18:56:31.856151 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 18:56:31.889910 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:56:31.893243 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:56:31.896583 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 18:56:31.906195 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 18:56:31.906641 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 18:56:31.915342 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 18:56:31.921188 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 18:56:31.921605 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 18:56:31.928161 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 18:56:31.934272 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 18:56:31.937650 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 18:56:31.942528 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 18:56:31.947475 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 18:56:31.951600 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 18:56:31.956571 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 18:56:31.961563 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 18:56:31.970130 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 18:56:31.970546 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 18:56:31.977407 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:56:31.977830 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:56:31.982439 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 18:56:31.982877 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:56:31.986416 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 18:56:31.986930 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 18:56:31.994781 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 18:56:31.995119 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 18:56:32.001221 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 18:56:32.001451 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 18:56:32.006938 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 18:56:32.015049 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 18:56:32.015366 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:56:32.026232 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 18:56:32.031974 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 18:56:32.032382 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:56:32.036978 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 18:56:32.037507 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 18:56:32.061209 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 18:56:32.061401 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 18:56:32.073005 ignition[1030]: INFO : Ignition 2.22.0
Jan 23 18:56:32.073005 ignition[1030]: INFO : Stage: umount
Jan 23 18:56:32.073005 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 18:56:32.073005 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 23 18:56:32.073005 ignition[1030]: INFO : umount: umount passed
Jan 23 18:56:32.073005 ignition[1030]: INFO : Ignition finished successfully
Jan 23 18:56:32.077933 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 18:56:32.078913 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 18:56:32.079085 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 18:56:32.086339 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 18:56:32.086433 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 18:56:32.089040 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 18:56:32.089127 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 18:56:32.095033 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 18:56:32.095120 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 18:56:32.101057 systemd[1]: Stopped target network.target - Network.
Jan 23 18:56:32.104973 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 18:56:32.105086 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 18:56:32.111028 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 18:56:32.114957 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 18:56:32.119031 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:56:32.121211 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 18:56:32.126352 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 18:56:32.130253 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 18:56:32.130482 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 18:56:32.134200 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 18:56:32.134256 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 18:56:32.138330 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 18:56:32.138459 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 18:56:32.142276 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 18:56:32.142369 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 18:56:32.147566 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 18:56:32.151566 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 18:56:32.159039 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 18:56:32.159428 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 18:56:32.171329 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 18:56:32.171611 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 18:56:32.171737 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 18:56:32.173934 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 18:56:32.174322 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 18:56:32.174460 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 18:56:32.178660 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 18:56:32.183002 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 18:56:32.183089 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:56:32.189987 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 18:56:32.190103 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 18:56:32.198968 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 18:56:32.207916 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 18:56:32.208031 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 18:56:32.211006 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 18:56:32.211095 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:56:32.218196 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 18:56:32.218283 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:56:32.221377 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 18:56:32.221584 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:56:32.229266 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:56:32.241742 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 18:56:32.241893 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:56:32.242431 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 18:56:32.242596 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:56:32.248479 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 18:56:32.248626 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:56:32.258110 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 18:56:32.258180 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:56:32.265129 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 18:56:32.265328 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 18:56:32.274964 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 18:56:32.275085 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:56:32.281977 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 18:56:32.282095 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:56:32.290846 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 18:56:32.300944 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 18:56:32.301116 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:56:32.307438 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 18:56:32.307525 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:56:32.322156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:56:32.322384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:56:32.326647 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 18:56:32.326742 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 18:56:32.326817 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:56:32.327469 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 18:56:32.327614 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 18:56:32.333451 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 18:56:32.333596 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 18:56:32.429012 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jan 23 18:56:32.340091 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 18:56:32.343761 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 18:56:32.368021 systemd[1]: Switching root.
Jan 23 18:56:32.435953 systemd-journald[192]: Journal stopped
Jan 23 18:56:34.507938 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 18:56:34.508000 kernel: SELinux: policy capability open_perms=1
Jan 23 18:56:34.508029 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 18:56:34.508046 kernel: SELinux: policy capability always_check_network=0
Jan 23 18:56:34.508063 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 18:56:34.508080 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 18:56:34.508100 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 18:56:34.508117 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 18:56:34.508140 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 18:56:34.508159 kernel: audit: type=1403 audit(1769194593.044:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 18:56:34.508191 systemd[1]: Successfully loaded SELinux policy in 70.701ms.
Jan 23 18:56:34.508212 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.066ms.
Jan 23 18:56:34.508234 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:56:34.508253 systemd[1]: Detected virtualization google.
Jan 23 18:56:34.508279 systemd[1]: Detected architecture x86-64.
Jan 23 18:56:34.508300 systemd[1]: Detected first boot.
Jan 23 18:56:34.508322 systemd[1]: Initializing machine ID from random generator.
Jan 23 18:56:34.508342 zram_generator::config[1073]: No configuration found.
Jan 23 18:56:34.508363 kernel: Guest personality initialized and is inactive
Jan 23 18:56:34.508383 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 18:56:34.508407 kernel: Initialized host personality
Jan 23 18:56:34.508428 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 18:56:34.508448 systemd[1]: Populated /etc with preset unit settings.
Jan 23 18:56:34.508471 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 18:56:34.508496 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 18:56:34.508518 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 18:56:34.508539 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 18:56:34.508565 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 18:56:34.508587 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 18:56:34.508608 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 18:56:34.508629 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 18:56:34.508649 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 18:56:34.508672 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 18:56:34.508695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 18:56:34.508721 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 18:56:34.508742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:56:34.508764 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:56:34.508787 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 18:56:34.510412 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 18:56:34.510451 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 18:56:34.510484 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:56:34.510507 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 18:56:34.510529 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:56:34.510555 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:56:34.510577 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 18:56:34.510602 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 18:56:34.510624 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 18:56:34.510646 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 18:56:34.510668 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:56:34.510690 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 18:56:34.510716 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:56:34.510738 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:56:34.510760 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 18:56:34.510782 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 18:56:34.510822 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 18:56:34.510846 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:56:34.510873 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:56:34.510894 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:56:34.510916 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 18:56:34.510939 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 18:56:34.510961 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 18:56:34.510983 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 18:56:34.511005 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:56:34.511032 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 18:56:34.511054 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 18:56:34.511076 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 18:56:34.511101 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 18:56:34.511123 systemd[1]: Reached target machines.target - Containers.
Jan 23 18:56:34.511145 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 18:56:34.511174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:56:34.511196 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:56:34.511222 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 18:56:34.511244 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:56:34.511264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 18:56:34.511284 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 18:56:34.511303 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 18:56:34.511324 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 18:56:34.511346 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 18:56:34.511366 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 18:56:34.511388 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 18:56:34.511416 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 18:56:34.511438 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 18:56:34.511461 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:56:34.511483 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:56:34.511503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:56:34.511524 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:56:34.511544 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 18:56:34.511568 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 18:56:34.511594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 18:56:34.512367 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 18:56:34.512423 systemd[1]: Stopped verity-setup.service.
Jan 23 18:56:34.512445 kernel: fuse: init (API version 7.41)
Jan 23 18:56:34.512468 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:56:34.512489 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 18:56:34.512511 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 18:56:34.512533 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 18:56:34.512560 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 18:56:34.512582 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 18:56:34.512603 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 18:56:34.512624 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:56:34.512645 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 18:56:34.512665 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 18:56:34.512686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:56:34.512706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:56:34.512726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 18:56:34.512751 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 18:56:34.512774 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 18:56:34.512794 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 18:56:34.512867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:56:34.512892 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:56:34.512915 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 18:56:34.512938 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:56:34.513011 systemd-journald[1140]: Collecting audit messages is disabled.
Jan 23 18:56:34.513071 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 18:56:34.513096 kernel: loop: module loaded
Jan 23 18:56:34.513118 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 18:56:34.513143 systemd-journald[1140]: Journal started
Jan 23 18:56:34.513201 systemd-journald[1140]: Runtime Journal (/run/log/journal/984a046ca4af46d9a0915c400767d732) is 8M, max 148.6M, 140.6M free.
Jan 23 18:56:33.957950 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 18:56:33.979884 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 23 18:56:33.980504 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 18:56:34.523826 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 18:56:34.523920 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 18:56:34.532139 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 18:56:34.549565 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 18:56:34.553839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:56:34.561293 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 18:56:34.566622 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 18:56:34.579849 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 18:56:34.592852 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:56:34.600838 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 18:56:34.606835 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:56:34.613129 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 18:56:34.631357 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 18:56:34.637878 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 18:56:34.645062 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 18:56:34.649200 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 18:56:34.672006 kernel: ACPI: bus type drm_connector registered
Jan 23 18:56:34.678294 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 18:56:34.678621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 18:56:34.683636 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:56:34.720046 kernel: loop0: detected capacity change from 0 to 229808
Jan 23 18:56:34.705304 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 18:56:34.712547 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 18:56:34.737334 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 18:56:34.752087 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 18:56:34.763085 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 18:56:34.765635 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 18:56:34.768313 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 18:56:34.813507 systemd-journald[1140]: Time spent on flushing to /var/log/journal/984a046ca4af46d9a0915c400767d732 is 107.810ms for 964 entries.
Jan 23 18:56:34.813507 systemd-journald[1140]: System Journal (/var/log/journal/984a046ca4af46d9a0915c400767d732) is 8M, max 584.8M, 576.8M free.
Jan 23 18:56:34.958775 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 18:56:34.958868 systemd-journald[1140]: Received client request to flush runtime journal.
Jan 23 18:56:34.958921 kernel: loop1: detected capacity change from 0 to 128560
Jan 23 18:56:34.958947 kernel: loop2: detected capacity change from 0 to 50736
Jan 23 18:56:34.881482 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 18:56:34.917893 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:56:34.945698 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 18:56:34.955253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:56:34.964759 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 18:56:34.983348 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 18:56:35.018752 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Jan 23 18:56:35.018787 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Jan 23 18:56:35.051522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:56:35.054950 kernel: loop3: detected capacity change from 0 to 110984
Jan 23 18:56:35.139833 kernel: loop4: detected capacity change from 0 to 229808
Jan 23 18:56:35.187865 kernel: loop5: detected capacity change from 0 to 128560
Jan 23 18:56:35.232275 kernel: loop6: detected capacity change from 0 to 50736
Jan 23 18:56:35.266235 kernel: loop7: detected capacity change from 0 to 110984
Jan 23 18:56:35.313556 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Jan 23 18:56:35.316321 (sd-merge)[1221]: Merged extensions into '/usr'.
Jan 23 18:56:35.327388 systemd[1]: Reload requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 18:56:35.327417 systemd[1]: Reloading...
Jan 23 18:56:35.547862 zram_generator::config[1247]: No configuration found.
Jan 23 18:56:35.805052 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 18:56:36.038645 systemd[1]: Reloading finished in 708 ms.
Jan 23 18:56:36.053209 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 18:56:36.057467 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 18:56:36.061417 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 18:56:36.084714 systemd[1]: Starting ensure-sysext.service...
Jan 23 18:56:36.091039 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:56:36.100210 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:56:36.128138 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)...
Jan 23 18:56:36.128297 systemd[1]: Reloading...
Jan 23 18:56:36.150953 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 18:56:36.154106 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 18:56:36.154639 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 18:56:36.157389 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 18:56:36.164860 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 18:56:36.165457 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Jan 23 18:56:36.165582 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Jan 23 18:56:36.166588 systemd-udevd[1290]: Using default interface naming scheme 'v255'.
Jan 23 18:56:36.176316 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 18:56:36.176336 systemd-tmpfiles[1289]: Skipping /boot
Jan 23 18:56:36.200902 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 18:56:36.200927 systemd-tmpfiles[1289]: Skipping /boot
Jan 23 18:56:36.269854 zram_generator::config[1320]: No configuration found.
Jan 23 18:56:36.657843 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 18:56:36.715841 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 23 18:56:36.757882 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jan 23 18:56:36.780844 kernel: ACPI: button: Power Button [PWRF]
Jan 23 18:56:36.816852 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jan 23 18:56:36.864743 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 18:56:36.865087 systemd[1]: Reloading finished in 735 ms.
Jan 23 18:56:36.880141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:56:36.907186 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:56:36.939834 kernel: ACPI: button: Sleep Button [SLPF]
Jan 23 18:56:36.944840 kernel: EDAC MC: Ver: 3.0.0
Jan 23 18:56:36.955594 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Jan 23 18:56:36.965146 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:56:36.968924 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 18:56:36.980326 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 18:56:36.991218 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:56:36.994293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:56:37.008945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 18:56:37.024990 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 18:56:37.033168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:56:37.033707 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:56:37.039925 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 18:56:37.062317 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 18:56:37.091274 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:56:37.103449 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 18:56:37.114717 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:56:37.164793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:56:37.166916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:56:37.188893 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 18:56:37.207499 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 18:56:37.209882 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 18:56:37.225729 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 18:56:37.226077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 18:56:37.265385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:56:37.266580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:56:37.282377 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:56:37.296484 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 18:56:37.308082 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 18:56:37.319946 augenrules[1445]: No rules
Jan 23 18:56:37.329534 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 18:56:37.340781 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 23 18:56:37.348074 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:56:37.349071 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:56:37.349542 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 18:56:37.366373 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 18:56:37.375941 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 18:56:37.376326 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:56:37.384428 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 18:56:37.384783 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 18:56:37.395231 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 18:56:37.407144 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:56:37.407494 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:56:37.419282 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 18:56:37.419609 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 18:56:37.430911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 18:56:37.431278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 18:56:37.442709 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 18:56:37.454473 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 18:56:37.454970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 18:56:37.503850 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 23 18:56:37.511841 systemd[1]: Finished ensure-sysext.service.
Jan 23 18:56:37.519540 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 18:56:37.564117 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 23 18:56:37.597203 systemd-networkd[1420]: lo: Link UP
Jan 23 18:56:37.597222 systemd-networkd[1420]: lo: Gained carrier
Jan 23 18:56:37.599780 systemd-networkd[1420]: Enumeration completed
Jan 23 18:56:37.600414 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:56:37.600423 systemd-networkd[1420]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 18:56:37.600885 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Jan 23 18:56:37.603396 systemd-networkd[1420]: eth0: Link UP
Jan 23 18:56:37.603695 systemd-networkd[1420]: eth0: Gained carrier
Jan 23 18:56:37.603737 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:56:37.605208 systemd-resolved[1421]: Positive Trust Anchors:
Jan 23 18:56:37.605634 systemd-resolved[1421]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:56:37.605710 systemd-resolved[1421]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:56:37.613069 systemd-resolved[1421]: Defaulting to hostname 'linux'.
Jan 23 18:56:37.613168 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 18:56:37.615897 systemd-networkd[1420]: eth0: DHCPv4 address 10.128.0.7/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 23 18:56:37.622988 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 18:56:37.623251 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 18:56:37.627142 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 18:56:37.639632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:56:37.640097 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 18:56:37.640343 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:56:37.640588 systemd[1]: Reached target network.target - Network.
Jan 23 18:56:37.640652 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:56:37.644288 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 18:56:37.648094 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 18:56:37.680096 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 18:56:37.710315 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Jan 23 18:56:37.710932 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 18:56:37.711349 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 18:56:37.792035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:56:37.802333 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 18:56:37.811182 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 18:56:37.822077 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 18:56:37.833025 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 18:56:37.843247 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 18:56:37.852189 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 18:56:37.862026 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 18:56:37.872019 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 18:56:37.872093 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:56:37.880019 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:56:37.890537 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 18:56:37.901818 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 18:56:37.911296 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 18:56:37.922281 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 18:56:37.933035 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 18:56:37.952850 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 18:56:37.962508 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 18:56:37.973996 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 18:56:37.984185 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:56:37.993017 systemd[1]: Reached target basic.target - Basic System.
Jan 23 18:56:38.001064 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 18:56:38.001127 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 18:56:38.002631 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 18:56:38.021012 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 18:56:38.041016 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 18:56:38.053103 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 18:56:38.078092 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 18:56:38.096107 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 18:56:38.104970 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 18:56:38.108901 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 18:56:38.109266 jq[1503]: false
Jan 23 18:56:38.112832 coreos-metadata[1500]: Jan 23 18:56:38.111 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Jan 23 18:56:38.114256 coreos-metadata[1500]: Jan 23 18:56:38.114 INFO Fetch successful
Jan 23 18:56:38.114586 coreos-metadata[1500]: Jan 23 18:56:38.114 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Jan 23 18:56:38.115167 coreos-metadata[1500]: Jan 23 18:56:38.115 INFO Fetch successful
Jan 23 18:56:38.115323 coreos-metadata[1500]: Jan 23 18:56:38.115 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Jan 23 18:56:38.115544 coreos-metadata[1500]: Jan 23 18:56:38.115 INFO Fetch successful
Jan 23 18:56:38.115694 coreos-metadata[1500]: Jan 23 18:56:38.115 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Jan 23 18:56:38.116189 coreos-metadata[1500]: Jan 23 18:56:38.116 INFO Fetch successful
Jan 23 18:56:38.122037 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 18:56:38.134582 systemd[1]: Started ntpd.service - Network Time Service.
Jan 23 18:56:38.137657 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing passwd entry cache
Jan 23 18:56:38.137675 oslogin_cache_refresh[1507]: Refreshing passwd entry cache
Jan 23 18:56:38.145955 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 18:56:38.150563 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting users, quitting
Jan 23 18:56:38.150563 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 18:56:38.150718 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing group entry cache
Jan 23 18:56:38.150558 oslogin_cache_refresh[1507]: Failure getting users, quitting
Jan 23 18:56:38.150587 oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 18:56:38.150663 oslogin_cache_refresh[1507]: Refreshing group entry cache
Jan 23 18:56:38.151466 extend-filesystems[1506]: Found /dev/sda6
Jan 23 18:56:38.161593 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting groups, quitting
Jan 23 18:56:38.161593 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 18:56:38.157965 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 18:56:38.152307 oslogin_cache_refresh[1507]: Failure getting groups, quitting
Jan 23 18:56:38.152324 oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 18:56:38.167256 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 18:56:38.167431 extend-filesystems[1506]: Found /dev/sda9
Jan 23 18:56:38.188978 extend-filesystems[1506]: Checking size of /dev/sda9
Jan 23 18:56:38.189841 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 18:56:38.199031 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Jan 23 18:56:38.202738 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 18:56:38.205932 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 18:56:38.215795 extend-filesystems[1506]: Resized partition /dev/sda9
Jan 23 18:56:38.224010 extend-filesystems[1532]: resize2fs 1.47.3 (8-Jul-2025)
Jan 23 18:56:38.254042 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks
Jan 23 18:56:38.220031 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 18:56:38.250900 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 18:56:38.261093 ntpd[1512]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting
Jan 23 18:56:38.261187 ntpd[1512]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 18:56:38.261788 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting
Jan 23 18:56:38.261788 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 18:56:38.261788 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: ----------------------------------------------------
Jan 23 18:56:38.261788 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: ntp-4 is maintained by Network Time Foundation,
Jan 23 18:56:38.261788 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 18:56:38.261788 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: corporation. Support and training for ntp-4 are
Jan 23 18:56:38.261788 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: available at https://www.nwtime.org/support
Jan 23 18:56:38.261788 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: ----------------------------------------------------
Jan 23 18:56:38.261203 ntpd[1512]: ----------------------------------------------------
Jan 23 18:56:38.261217 ntpd[1512]: ntp-4 is maintained by Network Time Foundation,
Jan 23 18:56:38.261230 ntpd[1512]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 18:56:38.261245 ntpd[1512]: corporation. Support and training for ntp-4 are
Jan 23 18:56:38.261258 ntpd[1512]: available at https://www.nwtime.org/support
Jan 23 18:56:38.261272 ntpd[1512]: ----------------------------------------------------
Jan 23 18:56:38.267542 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 18:56:38.268022 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 18:56:38.268509 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 18:56:38.270211 ntpd[1512]: proto: precision = 0.108 usec (-23)
Jan 23 18:56:38.270364 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: proto: precision = 0.108 usec (-23)
Jan 23 18:56:38.274086 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 18:56:38.278789 ntpd[1512]: basedate set to 2026-01-11
Jan 23 18:56:38.279293 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: basedate set to 2026-01-11
Jan 23 18:56:38.279293 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: gps base set to 2026-01-11 (week 2401)
Jan 23 18:56:38.279293 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 18:56:38.279293 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 18:56:38.279042 ntpd[1512]: gps base set to 2026-01-11 (week 2401)
Jan 23 18:56:38.279231 ntpd[1512]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 18:56:38.279273 ntpd[1512]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 18:56:38.279791 ntpd[1512]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 18:56:38.287262 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 18:56:38.287262 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: Listen normally on 3 eth0 10.128.0.7:123
Jan 23 18:56:38.287262 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: Listen normally on 4 lo [::1]:123
Jan 23 18:56:38.287262 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: bind(21) AF_INET6 [fe80::4001:aff:fe80:7%2]:123 flags 0x811 failed: Cannot assign requested address
Jan 23 18:56:38.287262 ntpd[1512]: 23 Jan 18:56:38 ntpd[1512]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:7%2]:123
Jan 23 18:56:38.279911 ntpd[1512]: Listen normally on 3 eth0 10.128.0.7:123
Jan 23 18:56:38.279957 ntpd[1512]: Listen normally on 4 lo [::1]:123
Jan 23 18:56:38.280014 ntpd[1512]: bind(21) AF_INET6 [fe80::4001:aff:fe80:7%2]:123 flags 0x811 failed: Cannot assign requested address
Jan 23 18:56:38.280047 ntpd[1512]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:7%2]:123
Jan 23 18:56:38.292393 jq[1530]: true
Jan 23 18:56:38.295402 kernel: ntpd[1512]: segfault at 24 ip 000056509c990aeb sp 00007ffd68281e60 error 4 in ntpd[68aeb,56509c92e000+80000] likely on CPU 0 (core 0, socket 0)
Jan 23 18:56:38.295464 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Jan 23 18:56:38.306096 update_engine[1529]: I20260123 18:56:38.305979 1529 main.cc:92] Flatcar Update Engine starting
Jan 23 18:56:38.316602 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 18:56:38.316977 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 18:56:38.330664 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 18:56:38.336731 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 18:56:38.344941 kernel: EXT4-fs (sda9): resized filesystem to 3587067
Jan 23 18:56:38.368012 systemd-coredump[1543]: Process 1512 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Jan 23 18:56:38.381656 extend-filesystems[1532]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 23 18:56:38.381656 extend-filesystems[1532]: old_desc_blocks = 1, new_desc_blocks = 2
Jan 23 18:56:38.381656 extend-filesystems[1532]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long.
Jan 23 18:56:38.422327 jq[1544]: true
Jan 23 18:56:38.386680 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 18:56:38.422599 extend-filesystems[1506]: Resized filesystem in /dev/sda9
Jan 23 18:56:38.387592 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 18:56:38.438403 (ntainerd)[1545]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 18:56:38.462892 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 18:56:38.515878 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Jan 23 18:56:38.527388 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 18:56:38.531161 systemd[1]: Started systemd-coredump@0-1543-0.service - Process Core Dump (PID 1543/UID 0).
Jan 23 18:56:38.606327 tar[1542]: linux-amd64/LICENSE
Jan 23 18:56:38.606327 tar[1542]: linux-amd64/helm
Jan 23 18:56:38.632730 bash[1580]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 18:56:38.632902 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 18:56:38.653076 systemd[1]: Starting sshkeys.service...
Jan 23 18:56:38.692053 systemd-logind[1526]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 23 18:56:38.695499 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 23 18:56:38.696899 systemd-logind[1526]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jan 23 18:56:38.696934 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 23 18:56:38.709225 systemd-logind[1526]: New seat seat0.
Jan 23 18:56:38.711938 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 23 18:56:38.727332 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 18:56:38.841762 dbus-daemon[1501]: [system] SELinux support is enabled
Jan 23 18:56:38.842077 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 18:56:38.847780 dbus-daemon[1501]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1420 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 23 18:56:38.858644 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 18:56:38.859546 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 18:56:38.864150 update_engine[1529]: I20260123 18:56:38.863884 1529 update_check_scheduler.cc:74] Next update check in 4m13s
Jan 23 18:56:38.870608 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 18:56:38.870881 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 18:56:38.884062 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 18:56:38.890310 dbus-daemon[1501]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 23 18:56:38.903399 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 23 18:56:38.906291 coreos-metadata[1583]: Jan 23 18:56:38.906 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Jan 23 18:56:38.909571 coreos-metadata[1583]: Jan 23 18:56:38.909 INFO Fetch failed with 404: resource not found
Jan 23 18:56:38.909571 coreos-metadata[1583]: Jan 23 18:56:38.909 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Jan 23 18:56:38.913433 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 18:56:38.915098 coreos-metadata[1583]: Jan 23 18:56:38.914 INFO Fetch successful
Jan 23 18:56:38.915098 coreos-metadata[1583]: Jan 23 18:56:38.915 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Jan 23 18:56:38.916298 coreos-metadata[1583]: Jan 23 18:56:38.916 INFO Fetch failed with 404: resource not found
Jan 23 18:56:38.916298 coreos-metadata[1583]: Jan 23 18:56:38.916 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Jan 23 18:56:38.924849 coreos-metadata[1583]: Jan 23 18:56:38.924 INFO Fetch failed with 404: resource not found
Jan 23 18:56:38.924849 coreos-metadata[1583]: Jan 23 18:56:38.924 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Jan 23 18:56:38.928534 coreos-metadata[1583]: Jan 23 18:56:38.925 INFO Fetch successful
Jan 23 18:56:38.936906 unknown[1583]: wrote ssh authorized keys file for user: core
Jan 23 18:56:38.967877 sshd_keygen[1541]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 18:56:39.027575 update-ssh-keys[1594]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 18:56:39.030093 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 23 18:56:39.042339 systemd[1]: Finished sshkeys.service.
Jan 23 18:56:39.068138 systemd-coredump[1573]: Process 1512 (ntpd) of user 0 dumped core.
    Module libnss_usrfiles.so.2 without build-id.
    Module libgcc_s.so.1 without build-id.
    Module ld-linux-x86-64.so.2 without build-id.
    Module libc.so.6 without build-id.
    Module libcrypto.so.3 without build-id.
    Module libm.so.6 without build-id.
    Module libcap.so.2 without build-id.
    Module ntpd without build-id.
    Stack trace of thread 1512:
    #0 0x000056509c990aeb n/a (ntpd + 0x68aeb)
    #1 0x000056509c939cdf n/a (ntpd + 0x11cdf)
    #2 0x000056509c93a575 n/a (ntpd + 0x12575)
    #3 0x000056509c935d8a n/a (ntpd + 0xdd8a)
    #4 0x000056509c9375d3 n/a (ntpd + 0xf5d3)
    #5 0x000056509c93ffd1 n/a (ntpd + 0x17fd1)
    #6 0x000056509c930c2d n/a (ntpd + 0x8c2d)
    #7 0x00007f6fcc21916c n/a (libc.so.6 + 0x2716c)
    #8 0x00007f6fcc219229 __libc_start_main (libc.so.6 + 0x27229)
    #9 0x000056509c930c55 n/a (ntpd + 0x8c55)
    ELF object binary architecture: AMD x86-64
Jan 23 18:56:39.073038 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Jan 23 18:56:39.073278 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Jan 23 18:56:39.085681 systemd[1]: systemd-coredump@0-1543-0.service: Deactivated successfully.
Jan 23 18:56:39.110757 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 23 18:56:39.129021 dbus-daemon[1501]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 23 18:56:39.130363 dbus-daemon[1501]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1591 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 23 18:56:39.147100 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 23 18:56:39.160910 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 18:56:39.178302 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 18:56:39.187527 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Jan 23 18:56:39.194989 systemd[1]: Started ntpd.service - Network Time Service.
Jan 23 18:56:39.274489 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 18:56:39.274908 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 18:56:39.292221 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 18:56:39.352662 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 18:56:39.357978 ntpd[1620]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting
Jan 23 18:56:39.361274 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting
Jan 23 18:56:39.361274 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 18:56:39.361274 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: ----------------------------------------------------
Jan 23 18:56:39.361274 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: ntp-4 is maintained by Network Time Foundation,
Jan 23 18:56:39.361274 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 18:56:39.361274 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: corporation. Support and training for ntp-4 are
Jan 23 18:56:39.361274 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: available at https://www.nwtime.org/support
Jan 23 18:56:39.361274 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: ----------------------------------------------------
Jan 23 18:56:39.360941 ntpd[1620]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 18:56:39.360963 ntpd[1620]: ----------------------------------------------------
Jan 23 18:56:39.360977 ntpd[1620]: ntp-4 is maintained by Network Time Foundation,
Jan 23 18:56:39.360991 ntpd[1620]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 18:56:39.361004 ntpd[1620]: corporation. Support and training for ntp-4 are
Jan 23 18:56:39.361018 ntpd[1620]: available at https://www.nwtime.org/support
Jan 23 18:56:39.361031 ntpd[1620]: ----------------------------------------------------
Jan 23 18:56:39.368625 systemd[1]: Started sshd@0-10.128.0.7:22-4.153.228.146:59002.service - OpenSSH per-connection server daemon (4.153.228.146:59002).
Jan 23 18:56:39.372684 ntpd[1620]: proto: precision = 0.072 usec (-24)
Jan 23 18:56:39.373275 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: proto: precision = 0.072 usec (-24)
Jan 23 18:56:39.373537 locksmithd[1592]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 18:56:39.417531 kernel: ntpd[1620]: segfault at 24 ip 000055e251f61aeb sp 00007ffd70c688c0 error 4 in ntpd[68aeb,55e251eff000+80000] likely on CPU 0 (core 0, socket 0)
Jan 23 18:56:39.417652 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Jan 23 18:56:39.380063 ntpd[1620]: basedate set to 2026-01-11
Jan 23 18:56:39.417861 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: basedate set to 2026-01-11
Jan 23 18:56:39.417861 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: gps base set to 2026-01-11 (week 2401)
Jan 23 18:56:39.417861 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 18:56:39.417861 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 18:56:39.417861 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 18:56:39.417861 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: Listen normally on 3 eth0 10.128.0.7:123
Jan 23 18:56:39.417861 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: Listen normally on 4 lo [::1]:123
Jan 23 18:56:39.417861 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: bind(21) AF_INET6 [fe80::4001:aff:fe80:7%2]:123 flags 0x811 failed: Cannot assign requested address
Jan 23 18:56:39.417861 ntpd[1620]: 23 Jan 18:56:39 ntpd[1620]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:7%2]:123
Jan 23 18:56:39.414589 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 18:56:39.380088 ntpd[1620]: gps base set to 2026-01-11 (week 2401)
Jan 23 18:56:39.380207 ntpd[1620]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 18:56:39.380248 ntpd[1620]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 18:56:39.381239 ntpd[1620]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 18:56:39.381289 ntpd[1620]: Listen normally on 3 eth0 10.128.0.7:123
Jan 23 18:56:39.381332 ntpd[1620]: Listen normally on 4 lo [::1]:123
Jan 23 18:56:39.381373 ntpd[1620]: bind(21) AF_INET6 [fe80::4001:aff:fe80:7%2]:123 flags 0x811 failed: Cannot assign requested address
Jan 23 18:56:39.381405 ntpd[1620]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:7%2]:123
Jan 23 18:56:39.433729 containerd[1545]: time="2026-01-23T18:56:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 18:56:39.433729 containerd[1545]: time="2026-01-23T18:56:39.427957888Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 23 18:56:39.447534 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.466897802Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.769µs"
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.466952278Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.466981525Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.467206637Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.467233786Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.467270940Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.467366345Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.467386974Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.467712624Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.467738412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.467755708Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 18:56:39.468343 containerd[1545]: time="2026-01-23T18:56:39.467769464Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 18:56:39.468960 containerd[1545]: time="2026-01-23T18:56:39.467913802Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 18:56:39.468960 containerd[1545]: time="2026-01-23T18:56:39.468219702Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 18:56:39.468960 containerd[1545]: time="2026-01-23T18:56:39.468271001Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 18:56:39.468960 containerd[1545]: time="2026-01-23T18:56:39.468289864Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 18:56:39.473104 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 23 18:56:39.475038 containerd[1545]: time="2026-01-23T18:56:39.474116450Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 18:56:39.475038 containerd[1545]: time="2026-01-23T18:56:39.474624680Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 18:56:39.475038 containerd[1545]: time="2026-01-23T18:56:39.474740859Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 18:56:39.482681 systemd[1]: Reached target getty.target - Login Prompts.
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.487917462Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.487999102Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488023204Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488044043Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488063900Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488095929Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488122680Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488142499Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488160735Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488177948Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488194835Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488217611Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488401495Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 18:56:39.489839 containerd[1545]: time="2026-01-23T18:56:39.488437271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488462588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488494445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488517260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488545410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488579793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488598922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488618246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488635608Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488663964Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488732862Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488766900Z" level=info msg="Start snapshots syncer"
Jan 23 18:56:39.490451 containerd[1545]: time="2026-01-23T18:56:39.488821161Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 18:56:39.490964 containerd[1545]: time="2026-01-23T18:56:39.489230293Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 18:56:39.490964 containerd[1545]: time="2026-01-23T18:56:39.489312431Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 18:56:39.491157 containerd[1545]: time="2026-01-23T18:56:39.489431479Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 18:56:39.491157 containerd[1545]: time="2026-01-23T18:56:39.489591055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 18:56:39.491157 containerd[1545]: time="2026-01-23T18:56:39.489622549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 18:56:39.491157 containerd[1545]: time="2026-01-23T18:56:39.489642897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 18:56:39.491157 containerd[1545]: time="2026-01-23T18:56:39.489659914Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 18:56:39.491157 containerd[1545]: time="2026-01-23T18:56:39.489684623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 18:56:39.491157 containerd[1545]: time="2026-01-23T18:56:39.489731558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 18:56:39.491157 containerd[1545]: time="2026-01-23T18:56:39.489751177Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 18:56:39.491157 containerd[1545]: time="2026-01-23T18:56:39.489785812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494272247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494323160Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494382686Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494408241Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494423708Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494439635Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494453418Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494468644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494494282Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494518941Z" level=info msg="runtime interface created"
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494528179Z" level=info msg="created NRI interface"
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494544585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494580211Z" level=info msg="Connect containerd service"
Jan 23 18:56:39.494845 containerd[1545]: time="2026-01-23T18:56:39.494616288Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 18:56:39.497455 systemd-coredump[1634]: Process 1620 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Jan 23 18:56:39.504460 systemd[1]: Started systemd-coredump@1-1634-0.service - Process Core Dump (PID 1634/UID 0).
Jan 23 18:56:39.507464 containerd[1545]: time="2026-01-23T18:56:39.507011934Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 18:56:39.547926 systemd-networkd[1420]: eth0: Gained IPv6LL
Jan 23 18:56:39.556020 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 18:56:39.567618 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 18:56:39.582259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:56:39.596230 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 18:56:39.612991 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Jan 23 18:56:39.708246 init.sh[1644]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Jan 23 18:56:39.708246 init.sh[1644]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Jan 23 18:56:39.717458 init.sh[1644]: + /usr/bin/google_instance_setup
Jan 23 18:56:39.782358 tar[1542]: linux-amd64/README.md
Jan 23 18:56:39.807229 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 18:56:39.816942 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 18:56:39.822466 polkitd[1615]: Started polkitd version 126
Jan 23 18:56:39.861047 polkitd[1615]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 18:56:39.861781 polkitd[1615]: Loading rules from directory /run/polkit-1/rules.d
Jan 23 18:56:39.864656 polkitd[1615]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jan 23 18:56:39.869751 polkitd[1615]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Jan 23 18:56:39.869950 polkitd[1615]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jan 23 18:56:39.871413 polkitd[1615]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 18:56:39.874696 polkitd[1615]: Finished loading, compiling and executing 2 rules
Jan 23 18:56:39.875125 systemd[1]: Started polkit.service - Authorization Manager.
Jan 23 18:56:39.878647 dbus-daemon[1501]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 23 18:56:39.881330 polkitd[1615]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 23 18:56:39.930270 systemd-hostnamed[1591]: Hostname set to (transient)
Jan 23 18:56:39.932496 systemd-resolved[1421]: System hostname changed to 'ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal'.
Jan 23 18:56:39.975370 containerd[1545]: time="2026-01-23T18:56:39.975065321Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 18:56:39.975370 containerd[1545]: time="2026-01-23T18:56:39.975163392Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 23 18:56:39.975370 containerd[1545]: time="2026-01-23T18:56:39.975198275Z" level=info msg="Start subscribing containerd event"
Jan 23 18:56:39.975370 containerd[1545]: time="2026-01-23T18:56:39.975233663Z" level=info msg="Start recovering state"
Jan 23 18:56:39.976837 containerd[1545]: time="2026-01-23T18:56:39.976087174Z" level=info msg="Start event monitor"
Jan 23 18:56:39.976837 containerd[1545]: time="2026-01-23T18:56:39.976123089Z" level=info msg="Start cni network conf syncer for default"
Jan 23 18:56:39.976837 containerd[1545]: time="2026-01-23T18:56:39.976136905Z" level=info msg="Start streaming server"
Jan 23 18:56:39.976837 containerd[1545]: time="2026-01-23T18:56:39.976214417Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 23 18:56:39.976837 containerd[1545]: time="2026-01-23T18:56:39.976228205Z" level=info msg="runtime interface starting up..."
Jan 23 18:56:39.976837 containerd[1545]: time="2026-01-23T18:56:39.976239385Z" level=info msg="starting plugins..."
Jan 23 18:56:39.976837 containerd[1545]: time="2026-01-23T18:56:39.976266939Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 23 18:56:39.978457 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 18:56:39.979246 containerd[1545]: time="2026-01-23T18:56:39.979200235Z" level=info msg="containerd successfully booted in 0.554805s"
Jan 23 18:56:39.997631 systemd-coredump[1635]: Process 1620 (ntpd) of user 0 dumped core.
    Module libnss_usrfiles.so.2 without build-id.
    Module libgcc_s.so.1 without build-id.
    Module ld-linux-x86-64.so.2 without build-id.
    Module libc.so.6 without build-id.
    Module libcrypto.so.3 without build-id.
    Module libm.so.6 without build-id.
    Module libcap.so.2 without build-id.
    Module ntpd without build-id.
    Stack trace of thread 1620:
    #0 0x000055e251f61aeb n/a (ntpd + 0x68aeb)
    #1 0x000055e251f0acdf n/a (ntpd + 0x11cdf)
    #2 0x000055e251f0b575 n/a (ntpd + 0x12575)
    #3 0x000055e251f06d8a n/a (ntpd + 0xdd8a)
    #4 0x000055e251f085d3 n/a (ntpd + 0xf5d3)
    #5 0x000055e251f10fd1 n/a (ntpd + 0x17fd1)
    #6 0x000055e251f01c2d n/a (ntpd + 0x8c2d)
    #7 0x00007f4ae277e16c n/a (libc.so.6 + 0x2716c)
    #8 0x00007f4ae277e229 __libc_start_main (libc.so.6 + 0x27229)
    #9 0x000055e251f01c55 n/a (ntpd + 0x8c55)
    ELF object binary architecture: AMD x86-64
Jan 23 18:56:40.004877 systemd[1]: systemd-coredump@1-1634-0.service: Deactivated successfully.
Jan 23 18:56:40.012378 sshd[1628]: Accepted publickey for core from 4.153.228.146 port 59002 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:56:40.009379 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:56:40.015271 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Jan 23 18:56:40.015536 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Jan 23 18:56:40.038672 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 23 18:56:40.051944 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 23 18:56:40.089217 systemd-logind[1526]: New session 1 of user core.
Jan 23 18:56:40.099707 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 18:56:40.115495 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 18:56:40.124737 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2.
Jan 23 18:56:40.127628 systemd[1]: Started ntpd.service - Network Time Service.
Jan 23 18:56:40.155976 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 23 18:56:40.167027 systemd-logind[1526]: New session c1 of user core.
Jan 23 18:56:40.187266 ntpd[1688]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting
Jan 23 18:56:40.188325 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:26:39 UTC 2026 (1): Starting
Jan 23 18:56:40.188325 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 18:56:40.188325 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: ----------------------------------------------------
Jan 23 18:56:40.188325 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: ntp-4 is maintained by Network Time Foundation,
Jan 23 18:56:40.188325 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 18:56:40.188325 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: corporation. Support and training for ntp-4 are
Jan 23 18:56:40.188325 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: available at https://www.nwtime.org/support
Jan 23 18:56:40.188325 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: ----------------------------------------------------
Jan 23 18:56:40.187354 ntpd[1688]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 18:56:40.192233 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: proto: precision = 0.079 usec (-24)
Jan 23 18:56:40.192233 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: basedate set to 2026-01-11
Jan 23 18:56:40.192233 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: gps base set to 2026-01-11 (week 2401)
Jan 23 18:56:40.192233 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 18:56:40.192233 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 18:56:40.192233 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 18:56:40.192233 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: Listen normally on 3 eth0 10.128.0.7:123
Jan 23 18:56:40.192233 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: Listen normally on 4 lo [::1]:123
Jan 23 18:56:40.192233 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:7%2]:123
Jan 23 18:56:40.192233 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: Listening on routing socket on fd #22 for interface updates
Jan 23 18:56:40.187370 ntpd[1688]: ----------------------------------------------------
Jan 23 18:56:40.187383 ntpd[1688]: ntp-4 is maintained by Network Time Foundation,
Jan 23 18:56:40.187400 ntpd[1688]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 18:56:40.187413 ntpd[1688]: corporation. Support and training for ntp-4 are
Jan 23 18:56:40.187426 ntpd[1688]: available at https://www.nwtime.org/support
Jan 23 18:56:40.187438 ntpd[1688]: ----------------------------------------------------
Jan 23 18:56:40.189609 ntpd[1688]: proto: precision = 0.079 usec (-24)
Jan 23 18:56:40.189977 ntpd[1688]: basedate set to 2026-01-11
Jan 23 18:56:40.189999 ntpd[1688]: gps base set to 2026-01-11 (week 2401)
Jan 23 18:56:40.195633 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 18:56:40.195633 ntpd[1688]: 23 Jan 18:56:40 ntpd[1688]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 18:56:40.190107 ntpd[1688]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 18:56:40.190144 ntpd[1688]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 18:56:40.190372 ntpd[1688]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 18:56:40.190416 ntpd[1688]: Listen normally on 3 eth0 10.128.0.7:123
Jan 23 18:56:40.190458 ntpd[1688]: Listen normally on 4 lo [::1]:123
Jan 23 18:56:40.190516 ntpd[1688]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:7%2]:123
Jan 23 18:56:40.190554 ntpd[1688]: Listening on routing socket on fd #22 for interface updates
Jan 23 18:56:40.195544 ntpd[1688]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 18:56:40.195578 ntpd[1688]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 18:56:40.498666 systemd[1687]: Queued start job for default target default.target.
Jan 23 18:56:40.508625 systemd[1687]: Created slice app.slice - User Application Slice.
Jan 23 18:56:40.508683 systemd[1687]: Reached target paths.target - Paths.
Jan 23 18:56:40.508768 systemd[1687]: Reached target timers.target - Timers.
Jan 23 18:56:40.512353 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 18:56:40.546180 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 18:56:40.547573 systemd[1687]: Reached target sockets.target - Sockets.
Jan 23 18:56:40.547654 systemd[1687]: Reached target basic.target - Basic System.
Jan 23 18:56:40.547730 systemd[1687]: Reached target default.target - Main User Target.
Jan 23 18:56:40.547788 systemd[1687]: Startup finished in 353ms.
Jan 23 18:56:40.548679 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 18:56:40.566095 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 18:56:40.670076 instance-setup[1651]: INFO Running google_set_multiqueue.
Jan 23 18:56:40.693533 instance-setup[1651]: INFO Set channels for eth0 to 2.
Jan 23 18:56:40.700560 instance-setup[1651]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Jan 23 18:56:40.703383 instance-setup[1651]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Jan 23 18:56:40.703458 instance-setup[1651]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Jan 23 18:56:40.705342 instance-setup[1651]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Jan 23 18:56:40.707962 instance-setup[1651]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Jan 23 18:56:40.711925 instance-setup[1651]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Jan 23 18:56:40.711996 instance-setup[1651]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Jan 23 18:56:40.714532 instance-setup[1651]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Jan 23 18:56:40.728685 instance-setup[1651]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Jan 23 18:56:40.735110 instance-setup[1651]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Jan 23 18:56:40.736785 instance-setup[1651]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Jan 23 18:56:40.737056 instance-setup[1651]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Jan 23 18:56:40.776533 systemd[1]: Started sshd@1-10.128.0.7:22-4.153.228.146:59004.service - OpenSSH per-connection server daemon (4.153.228.146:59004).
Jan 23 18:56:40.790975 init.sh[1644]: + /usr/bin/google_metadata_script_runner --script-type startup
Jan 23 18:56:40.994651 startup-script[1732]: INFO Starting startup scripts.
Jan 23 18:56:41.005432 startup-script[1732]: INFO No startup scripts found in metadata.
Jan 23 18:56:41.005596 startup-script[1732]: INFO Finished running startup scripts.
Jan 23 18:56:41.033915 init.sh[1644]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Jan 23 18:56:41.033915 init.sh[1644]: + daemon_pids=()
Jan 23 18:56:41.033915 init.sh[1644]: + for d in accounts clock_skew network
Jan 23 18:56:41.033915 init.sh[1644]: + daemon_pids+=($!)
Jan 23 18:56:41.033915 init.sh[1644]: + for d in accounts clock_skew network
Jan 23 18:56:41.034993 init.sh[1737]: + /usr/bin/google_accounts_daemon
Jan 23 18:56:41.035701 init.sh[1738]: + /usr/bin/google_clock_skew_daemon
Jan 23 18:56:41.036498 init.sh[1644]: + daemon_pids+=($!)
Jan 23 18:56:41.036724 init.sh[1644]: + for d in accounts clock_skew network
Jan 23 18:56:41.037103 init.sh[1644]: + daemon_pids+=($!)
Jan 23 18:56:41.037449 init.sh[1739]: + /usr/bin/google_network_daemon
Jan 23 18:56:41.037869 init.sh[1644]: + NOTIFY_SOCKET=/run/systemd/notify
Jan 23 18:56:41.038108 init.sh[1644]: + /usr/bin/systemd-notify --ready
Jan 23 18:56:41.052870 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Jan 23 18:56:41.069579 init.sh[1644]: + wait -n 1737 1738 1739
Jan 23 18:56:41.083842 sshd[1731]: Accepted publickey for core from 4.153.228.146 port 59004 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:56:41.088143 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:56:41.101182 systemd-logind[1526]: New session 2 of user core.
Jan 23 18:56:41.106465 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 18:56:41.286549 sshd[1741]: Connection closed by 4.153.228.146 port 59004
Jan 23 18:56:41.285101 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Jan 23 18:56:41.299235 systemd[1]: sshd@1-10.128.0.7:22-4.153.228.146:59004.service: Deactivated successfully.
Jan 23 18:56:41.307404 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 18:56:41.314453 systemd-logind[1526]: Session 2 logged out. Waiting for processes to exit.
Jan 23 18:56:41.335910 systemd[1]: Started sshd@2-10.128.0.7:22-4.153.228.146:59020.service - OpenSSH per-connection server daemon (4.153.228.146:59020).
Jan 23 18:56:41.348795 systemd-logind[1526]: Removed session 2.
Jan 23 18:56:41.462849 google-clock-skew[1738]: INFO Starting Google Clock Skew daemon.
Jan 23 18:56:41.485577 google-clock-skew[1738]: INFO Clock drift token has changed: 0.
Jan 23 18:56:41.497219 google-networking[1739]: INFO Starting Google Networking daemon.
Jan 23 18:56:42.000496 systemd-resolved[1421]: Clock change detected. Flushing caches.
Jan 23 18:56:42.004158 google-clock-skew[1738]: INFO Synced system time with hardware clock.
Jan 23 18:56:42.082608 groupadd[1758]: group added to /etc/group: name=google-sudoers, GID=1000
Jan 23 18:56:42.086986 groupadd[1758]: group added to /etc/gshadow: name=google-sudoers
Jan 23 18:56:42.103608 sshd[1749]: Accepted publickey for core from 4.153.228.146 port 59020 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:56:42.105098 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:56:42.116240 systemd-logind[1526]: New session 3 of user core.
Jan 23 18:56:42.121422 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 18:56:42.156037 groupadd[1758]: new group: name=google-sudoers, GID=1000
Jan 23 18:56:42.187396 google-accounts[1737]: INFO Starting Google Accounts daemon.
Jan 23 18:56:42.201707 google-accounts[1737]: WARNING OS Login not installed.
Jan 23 18:56:42.203555 google-accounts[1737]: INFO Creating a new user account for 0.
Jan 23 18:56:42.210054 init.sh[1767]: useradd: invalid user name '0': use --badname to ignore
Jan 23 18:56:42.210436 google-accounts[1737]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Jan 23 18:56:42.283540 sshd[1762]: Connection closed by 4.153.228.146 port 59020
Jan 23 18:56:42.284502 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
Jan 23 18:56:42.292990 systemd[1]: sshd@2-10.128.0.7:22-4.153.228.146:59020.service: Deactivated successfully.
Jan 23 18:56:42.296274 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 18:56:42.298563 systemd-logind[1526]: Session 3 logged out. Waiting for processes to exit.
Jan 23 18:56:42.304007 systemd-logind[1526]: Removed session 3.
Jan 23 18:56:42.337853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:56:42.349675 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 18:56:42.358825 (kubelet)[1778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 18:56:42.359600 systemd[1]: Startup finished in 4.085s (kernel) + 8.245s (initrd) + 8.903s (userspace) = 21.234s.
Jan 23 18:56:43.247788 kubelet[1778]: E0123 18:56:43.247700 1778 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 18:56:43.251142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 18:56:43.251434 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 18:56:43.252009 systemd[1]: kubelet.service: Consumed 1.314s CPU time, 266.4M memory peak.
Jan 23 18:56:49.501754 systemd[1]: Started sshd@3-10.128.0.7:22-121.204.169.237:36810.service - OpenSSH per-connection server daemon (121.204.169.237:36810).
Jan 23 18:56:49.833375 sshd[1790]: Connection closed by 121.204.169.237 port 36810
Jan 23 18:56:49.835290 systemd[1]: sshd@3-10.128.0.7:22-121.204.169.237:36810.service: Deactivated successfully.
Jan 23 18:56:52.329657 systemd[1]: Started sshd@4-10.128.0.7:22-4.153.228.146:47524.service - OpenSSH per-connection server daemon (4.153.228.146:47524).
Jan 23 18:56:52.576273 sshd[1795]: Accepted publickey for core from 4.153.228.146 port 47524 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:56:52.577917 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:56:52.585706 systemd-logind[1526]: New session 4 of user core.
Jan 23 18:56:52.588464 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 18:56:52.748498 sshd[1798]: Connection closed by 4.153.228.146 port 47524
Jan 23 18:56:52.749524 sshd-session[1795]: pam_unix(sshd:session): session closed for user core
Jan 23 18:56:52.754660 systemd[1]: sshd@4-10.128.0.7:22-4.153.228.146:47524.service: Deactivated successfully.
Jan 23 18:56:52.757343 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 18:56:52.759832 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit.
Jan 23 18:56:52.761745 systemd-logind[1526]: Removed session 4.
Jan 23 18:56:52.790638 systemd[1]: Started sshd@5-10.128.0.7:22-4.153.228.146:47534.service - OpenSSH per-connection server daemon (4.153.228.146:47534).
Jan 23 18:56:53.039391 sshd[1804]: Accepted publickey for core from 4.153.228.146 port 47534 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:56:53.041234 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:56:53.048600 systemd-logind[1526]: New session 5 of user core.
Jan 23 18:56:53.051406 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 18:56:53.202086 sshd[1807]: Connection closed by 4.153.228.146 port 47534
Jan 23 18:56:53.203503 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
Jan 23 18:56:53.209140 systemd[1]: sshd@5-10.128.0.7:22-4.153.228.146:47534.service: Deactivated successfully.
Jan 23 18:56:53.211981 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 18:56:53.213150 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit.
Jan 23 18:56:53.215303 systemd-logind[1526]: Removed session 5.
Jan 23 18:56:53.247573 systemd[1]: Started sshd@6-10.128.0.7:22-4.153.228.146:47550.service - OpenSSH per-connection server daemon (4.153.228.146:47550).
Jan 23 18:56:53.254016 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 18:56:53.256969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:56:53.495036 sshd[1813]: Accepted publickey for core from 4.153.228.146 port 47550 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:56:53.497579 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:56:53.509512 systemd-logind[1526]: New session 6 of user core.
Jan 23 18:56:53.515444 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 18:56:53.616994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:56:53.630775 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 18:56:53.670465 sshd[1819]: Connection closed by 4.153.228.146 port 47550
Jan 23 18:56:53.673541 sshd-session[1813]: pam_unix(sshd:session): session closed for user core
Jan 23 18:56:53.681585 systemd[1]: sshd@6-10.128.0.7:22-4.153.228.146:47550.service: Deactivated successfully.
Jan 23 18:56:53.685464 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 18:56:53.688150 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit.
Jan 23 18:56:53.689809 systemd-logind[1526]: Removed session 6.
Jan 23 18:56:53.701226 kubelet[1825]: E0123 18:56:53.701107 1825 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 18:56:53.709930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 18:56:53.710512 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 18:56:53.711276 systemd[1]: kubelet.service: Consumed 219ms CPU time, 110.2M memory peak.
Jan 23 18:56:53.714711 systemd[1]: Started sshd@7-10.128.0.7:22-4.153.228.146:47564.service - OpenSSH per-connection server daemon (4.153.228.146:47564).
Jan 23 18:56:53.944091 sshd[1837]: Accepted publickey for core from 4.153.228.146 port 47564 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:56:53.945755 sshd-session[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:56:53.953235 systemd-logind[1526]: New session 7 of user core.
Jan 23 18:56:53.960415 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 18:56:54.105754 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 18:56:54.106285 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:56:54.122591 sudo[1841]: pam_unix(sudo:session): session closed for user root
Jan 23 18:56:54.153561 sshd[1840]: Connection closed by 4.153.228.146 port 47564
Jan 23 18:56:54.154782 sshd-session[1837]: pam_unix(sshd:session): session closed for user core
Jan 23 18:56:54.161514 systemd[1]: sshd@7-10.128.0.7:22-4.153.228.146:47564.service: Deactivated successfully.
Jan 23 18:56:54.163994 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 18:56:54.165784 systemd-logind[1526]: Session 7 logged out. Waiting for processes to exit.
Jan 23 18:56:54.167723 systemd-logind[1526]: Removed session 7.
Jan 23 18:56:54.199642 systemd[1]: Started sshd@8-10.128.0.7:22-4.153.228.146:47574.service - OpenSSH per-connection server daemon (4.153.228.146:47574).
Jan 23 18:56:54.451284 sshd[1847]: Accepted publickey for core from 4.153.228.146 port 47574 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:56:54.453115 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:56:54.460690 systemd-logind[1526]: New session 8 of user core.
Jan 23 18:56:54.466386 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 18:56:54.596099 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 18:56:54.596606 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:56:54.603522 sudo[1852]: pam_unix(sudo:session): session closed for user root
Jan 23 18:56:54.617312 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 18:56:54.617772 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:56:54.630414 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 18:56:54.678304 augenrules[1874]: No rules
Jan 23 18:56:54.679956 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 18:56:54.680329 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 18:56:54.682240 sudo[1851]: pam_unix(sudo:session): session closed for user root
Jan 23 18:56:54.713802 sshd[1850]: Connection closed by 4.153.228.146 port 47574
Jan 23 18:56:54.714975 sshd-session[1847]: pam_unix(sshd:session): session closed for user core
Jan 23 18:56:54.720322 systemd[1]: sshd@8-10.128.0.7:22-4.153.228.146:47574.service: Deactivated successfully.
Jan 23 18:56:54.722850 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 18:56:54.724401 systemd-logind[1526]: Session 8 logged out. Waiting for processes to exit.
Jan 23 18:56:54.727310 systemd-logind[1526]: Removed session 8.
Jan 23 18:56:54.760665 systemd[1]: Started sshd@9-10.128.0.7:22-4.153.228.146:44882.service - OpenSSH per-connection server daemon (4.153.228.146:44882).
Jan 23 18:56:55.002201 sshd[1883]: Accepted publickey for core from 4.153.228.146 port 44882 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:56:55.004210 sshd-session[1883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:56:55.011799 systemd-logind[1526]: New session 9 of user core.
Jan 23 18:56:55.021456 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 18:56:55.146954 sudo[1887]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 18:56:55.147509 sudo[1887]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:56:55.654871 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 18:56:55.666820 (dockerd)[1906]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 18:56:56.018424 dockerd[1906]: time="2026-01-23T18:56:56.017641547Z" level=info msg="Starting up"
Jan 23 18:56:56.022769 dockerd[1906]: time="2026-01-23T18:56:56.022498543Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 18:56:56.039726 dockerd[1906]: time="2026-01-23T18:56:56.039669258Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 18:56:56.089894 dockerd[1906]: time="2026-01-23T18:56:56.089798375Z" level=info msg="Loading containers: start."
Jan 23 18:56:56.110202 kernel: Initializing XFRM netlink socket
Jan 23 18:56:56.470822 systemd-networkd[1420]: docker0: Link UP
Jan 23 18:56:56.478423 dockerd[1906]: time="2026-01-23T18:56:56.478334653Z" level=info msg="Loading containers: done."
Jan 23 18:56:56.500616 dockerd[1906]: time="2026-01-23T18:56:56.500390227Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 18:56:56.500616 dockerd[1906]: time="2026-01-23T18:56:56.500499479Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 18:56:56.501347 dockerd[1906]: time="2026-01-23T18:56:56.500972831Z" level=info msg="Initializing buildkit"
Jan 23 18:56:56.532423 dockerd[1906]: time="2026-01-23T18:56:56.532374096Z" level=info msg="Completed buildkit initialization"
Jan 23 18:56:56.544916 dockerd[1906]: time="2026-01-23T18:56:56.544849202Z" level=info msg="Daemon has completed initialization"
Jan 23 18:56:56.545246 dockerd[1906]: time="2026-01-23T18:56:56.544949092Z" level=info msg="API listen on /run/docker.sock"
Jan 23 18:56:56.545557 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 18:56:57.479825 containerd[1545]: time="2026-01-23T18:56:57.479767488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 23 18:56:57.915390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3028554965.mount: Deactivated successfully.
Jan 23 18:56:59.644837 containerd[1545]: time="2026-01-23T18:56:59.644738518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:56:59.647625 containerd[1545]: time="2026-01-23T18:56:59.647554805Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30122799" Jan 23 18:56:59.651724 containerd[1545]: time="2026-01-23T18:56:59.651201111Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:56:59.655043 containerd[1545]: time="2026-01-23T18:56:59.655002891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:56:59.656363 containerd[1545]: time="2026-01-23T18:56:59.656316718Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.17648795s" Jan 23 18:56:59.656487 containerd[1545]: time="2026-01-23T18:56:59.656372194Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 18:56:59.657688 containerd[1545]: time="2026-01-23T18:56:59.657655368Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 18:57:01.336860 containerd[1545]: time="2026-01-23T18:57:01.336788729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:01.338540 containerd[1545]: time="2026-01-23T18:57:01.338496237Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26018839" Jan 23 18:57:01.340064 containerd[1545]: time="2026-01-23T18:57:01.339999635Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:01.345261 containerd[1545]: time="2026-01-23T18:57:01.345193962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:01.347199 containerd[1545]: time="2026-01-23T18:57:01.346507181Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.688684517s" Jan 23 18:57:01.347199 containerd[1545]: time="2026-01-23T18:57:01.346552216Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 23 18:57:01.347369 
containerd[1545]: time="2026-01-23T18:57:01.347304090Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 18:57:02.682323 containerd[1545]: time="2026-01-23T18:57:02.682255088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:02.683733 containerd[1545]: time="2026-01-23T18:57:02.683689277Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20160142" Jan 23 18:57:02.685123 containerd[1545]: time="2026-01-23T18:57:02.685038452Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:02.689562 containerd[1545]: time="2026-01-23T18:57:02.689494161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:02.691079 containerd[1545]: time="2026-01-23T18:57:02.690873135Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.343535227s" Jan 23 18:57:02.691079 containerd[1545]: time="2026-01-23T18:57:02.690921396Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 18:57:02.691783 containerd[1545]: time="2026-01-23T18:57:02.691740033Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 18:57:03.799092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2018446861.mount: Deactivated successfully. Jan 23 18:57:03.801800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:57:03.806892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:04.138745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:04.151784 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:57:04.226699 kubelet[2198]: E0123 18:57:04.226637 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:57:04.231794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:57:04.232240 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:57:04.233261 systemd[1]: kubelet.service: Consumed 247ms CPU time, 109.8M memory peak. 
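The kubelet exit at 18:57:04 is the usual pre-join state: the unit is configured to read /var/lib/kubelet/config.yaml, which kubeadm writes during init/join, so the service crash-loops until that happens. For illustration only, a minimal file of the expected shape (kubeadm normally generates this; the contents here are placeholders, not this node's eventual configuration):

```bash
# Illustrative sketch — "kubeadm init"/"kubeadm join" writes the real file.
sudo tee /var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
```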
Jan 23 18:57:04.645152 containerd[1545]: time="2026-01-23T18:57:04.645078332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:04.646549 containerd[1545]: time="2026-01-23T18:57:04.646368076Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31932119" Jan 23 18:57:04.648674 containerd[1545]: time="2026-01-23T18:57:04.648630872Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:04.651129 containerd[1545]: time="2026-01-23T18:57:04.651062403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:04.652178 containerd[1545]: time="2026-01-23T18:57:04.651878785Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.959934678s" Jan 23 18:57:04.652178 containerd[1545]: time="2026-01-23T18:57:04.651922489Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 18:57:04.652921 containerd[1545]: time="2026-01-23T18:57:04.652751246Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 18:57:05.140310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905215185.mount: Deactivated successfully. 
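The pull messages carry enough data to estimate effective registry throughput: kube-proxy above reports 31932119 bytes read over a 1.959934678s pull. A quick sketch of the arithmetic:

```bash
# Effective pull rate for kube-proxy, using the two numbers logged above.
awk 'BEGIN { b = 31932119; s = 1.959934678;
             printf "%.1f MB/s (%.1f MiB/s)\n", b/s/1e6, b/s/2^20 }'
# -> roughly 16.3 MB/s (15.5 MiB/s)
```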
Jan 23 18:57:06.427898 containerd[1545]: time="2026-01-23T18:57:06.427821562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:06.429818 containerd[1545]: time="2026-01-23T18:57:06.429763141Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20949320" Jan 23 18:57:06.431083 containerd[1545]: time="2026-01-23T18:57:06.430796622Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:06.435050 containerd[1545]: time="2026-01-23T18:57:06.434968258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:06.436918 containerd[1545]: time="2026-01-23T18:57:06.436749913Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.783956078s" Jan 23 18:57:06.436918 containerd[1545]: time="2026-01-23T18:57:06.436795244Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 18:57:06.437422 containerd[1545]: time="2026-01-23T18:57:06.437383714Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 18:57:06.857962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762174834.mount: Deactivated successfully. 
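After these pulls, the images are visible through the CRI. A sketch using crictl, assuming containerd's default CRI socket path (configurable via /etc/crictl.yaml):

```bash
# List CRI-managed images; the repo tags pulled above should appear.
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images \
  | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|coredns'
```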
Jan 23 18:57:06.865923 containerd[1545]: time="2026-01-23T18:57:06.865858068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:57:06.867133 containerd[1545]: time="2026-01-23T18:57:06.867058685Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322136" Jan 23 18:57:06.868506 containerd[1545]: time="2026-01-23T18:57:06.868447775Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:57:06.872703 containerd[1545]: time="2026-01-23T18:57:06.872626511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:57:06.874198 containerd[1545]: time="2026-01-23T18:57:06.873547388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 435.998689ms" Jan 23 18:57:06.874198 containerd[1545]: time="2026-01-23T18:57:06.873592921Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 18:57:06.874371 containerd[1545]: time="2026-01-23T18:57:06.874327262Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 18:57:07.257568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3508770458.mount: Deactivated successfully. 
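Note the extra io.cri-containerd.pinned=pinned label on pause:3.10, which none of the earlier images carried: containerd pins the sandbox image so image garbage collection will not remove it. The labels can be inspected with ctr; a sketch:

```bash
# Images in containerd's k8s.io namespace show their CRI labels in the
# LABELS column, including the "pinned" marker on the pause image.
sudo ctr -n k8s.io images ls | grep pause
```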
Jan 23 18:57:09.609197 containerd[1545]: time="2026-01-23T18:57:09.609125983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:09.610632 containerd[1545]: time="2026-01-23T18:57:09.610582727Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58933254" Jan 23 18:57:09.612330 containerd[1545]: time="2026-01-23T18:57:09.612250661Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:09.616254 containerd[1545]: time="2026-01-23T18:57:09.615923960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:09.617511 containerd[1545]: time="2026-01-23T18:57:09.617344819Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.742969729s" Jan 23 18:57:09.617511 containerd[1545]: time="2026-01-23T18:57:09.617388079Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 18:57:10.433663 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 18:57:13.479373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:13.480489 systemd[1]: kubelet.service: Consumed 247ms CPU time, 109.8M memory peak. Jan 23 18:57:13.483744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:13.534245 systemd[1]: Reload requested from client PID 2348 ('systemctl') (unit session-9.scope)... Jan 23 18:57:13.534269 systemd[1]: Reloading... Jan 23 18:57:13.690216 zram_generator::config[2388]: No configuration found. Jan 23 18:57:14.047823 systemd[1]: Reloading finished in 512 ms. Jan 23 18:57:14.105330 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:57:14.105472 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:57:14.105881 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:14.105955 systemd[1]: kubelet.service: Consumed 131ms CPU time, 82.2M memory peak. Jan 23 18:57:14.109302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:14.555281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:14.571062 (kubelet)[2440]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:57:14.624862 kubelet[2440]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:57:14.625344 kubelet[2440]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
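The reload/restart dance at 18:57:13 corresponds to roughly this operator sequence (inferred from the log; whatever edit to the kubelet unit or its environment triggered it is not shown here):

```bash
# Re-read unit files after changing the kubelet's configuration, then
# restart the service and follow the new process's output.
sudo systemctl daemon-reload
sudo systemctl restart kubelet.service
journalctl -u kubelet.service -f
```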
Jan 23 18:57:14.625344 kubelet[2440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:57:14.625344 kubelet[2440]: I0123 18:57:14.625206 2440 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:57:15.275222 kubelet[2440]: I0123 18:57:15.275119 2440 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:57:15.275222 kubelet[2440]: I0123 18:57:15.275152 2440 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:57:15.275619 kubelet[2440]: I0123 18:57:15.275580 2440 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:57:15.333114 kubelet[2440]: E0123 18:57:15.333051 2440 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:57:15.338237 kubelet[2440]: I0123 18:57:15.338160 2440 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:57:15.350306 kubelet[2440]: I0123 18:57:15.350266 2440 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:57:15.356291 kubelet[2440]: I0123 18:57:15.356237 2440 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 18:57:15.356681 kubelet[2440]: I0123 18:57:15.356620 2440 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:57:15.356928 kubelet[2440]: I0123 18:57:15.356669 2440 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:57:15.357111 kubelet[2440]: I0123 18:57:15.356930 2440 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:57:15.357111 kubelet[2440]: I0123 18:57:15.356950 2440 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:57:15.358587 kubelet[2440]: I0123 18:57:15.358536 2440 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:57:15.362428 kubelet[2440]: I0123 18:57:15.362398 2440 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:57:15.362428 kubelet[2440]: I0123 18:57:15.362433 2440 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:57:15.362603 kubelet[2440]: I0123 18:57:15.362471 2440 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:57:15.365046 kubelet[2440]: I0123 18:57:15.364716 2440 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:57:15.374335 kubelet[2440]: E0123 18:57:15.374278 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:57:15.374465 kubelet[2440]: I0123 18:57:15.374412 2440 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:57:15.375828 kubelet[2440]: I0123 18:57:15.375128 2440 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are 
in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:57:15.377571 kubelet[2440]: W0123 18:57:15.376742 2440 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 18:57:15.388825 kubelet[2440]: E0123 18:57:15.388773 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:57:15.399658 kubelet[2440]: I0123 18:57:15.399610 2440 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:57:15.399804 kubelet[2440]: I0123 18:57:15.399698 2440 server.go:1289] "Started kubelet" Jan 23 18:57:15.401930 kubelet[2440]: I0123 18:57:15.400997 2440 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:57:15.402375 kubelet[2440]: I0123 18:57:15.402337 2440 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:57:15.406711 kubelet[2440]: I0123 18:57:15.406628 2440 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:57:15.407329 kubelet[2440]: I0123 18:57:15.407306 2440 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:57:15.410689 kubelet[2440]: I0123 18:57:15.409605 2440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:57:15.410889 kubelet[2440]: E0123 18:57:15.408528 2440 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal.188d7125b585ccdb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal,UID:ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal,},FirstTimestamp:2026-01-23 18:57:15.399646427 +0000 UTC m=+0.822490502,LastTimestamp:2026-01-23 18:57:15.399646427 +0000 UTC m=+0.822490502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal,}" Jan 23 18:57:15.411716 kubelet[2440]: I0123 18:57:15.411667 2440 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:57:15.416589 kubelet[2440]: E0123 18:57:15.416541 2440 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" Jan 23 18:57:15.416802 kubelet[2440]: I0123 18:57:15.416718 2440 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:57:15.417838 kubelet[2440]: I0123 18:57:15.417320 2440 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:57:15.417838 kubelet[2440]: I0123 18:57:15.417395 2440 reconciler.go:26] "Reconciler: 
start to sync state" Jan 23 18:57:15.418286 kubelet[2440]: E0123 18:57:15.418253 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:57:15.418631 kubelet[2440]: I0123 18:57:15.418607 2440 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:57:15.418864 kubelet[2440]: I0123 18:57:15.418839 2440 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:57:15.420917 kubelet[2440]: E0123 18:57:15.420849 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.7:6443: connect: connection refused" interval="200ms" Jan 23 18:57:15.423209 kubelet[2440]: E0123 18:57:15.421678 2440 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:57:15.423209 kubelet[2440]: I0123 18:57:15.421840 2440 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:57:15.441359 kubelet[2440]: I0123 18:57:15.441308 2440 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:57:15.448542 kubelet[2440]: I0123 18:57:15.448489 2440 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 18:57:15.448758 kubelet[2440]: I0123 18:57:15.448742 2440 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:57:15.448907 kubelet[2440]: I0123 18:57:15.448888 2440 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
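Every "connection refused" against 10.128.0.7:6443 in this stretch is expected at this stage: the kubelet comes up before the static kube-apiserver pod it is about to launch, so nothing listens on 6443 yet and every watch, lease, and event post fails until the control plane is running. A sketch for watching the port open (the address is taken from the log):

```bash
# Poll until a TCP connection to the API server succeeds; the loop exits
# once the kube-apiserver static pod is up and serving.
until curl -ksS https://10.128.0.7:6443/healthz; do sleep 2; done
```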
Jan 23 18:57:15.449023 kubelet[2440]: I0123 18:57:15.449011 2440 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:57:15.449225 kubelet[2440]: E0123 18:57:15.449161 2440 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:57:15.452000 kubelet[2440]: E0123 18:57:15.451953 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:57:15.458809 kubelet[2440]: I0123 18:57:15.458788 2440 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:57:15.459027 kubelet[2440]: I0123 18:57:15.459010 2440 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:57:15.459262 kubelet[2440]: I0123 18:57:15.459241 2440 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:57:15.461869 kubelet[2440]: I0123 18:57:15.461831 2440 policy_none.go:49] "None policy: Start" Jan 23 18:57:15.461869 kubelet[2440]: I0123 18:57:15.461858 2440 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:57:15.461869 kubelet[2440]: I0123 18:57:15.461877 2440 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:57:15.470428 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 18:57:15.485789 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:57:15.491294 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:57:15.503352 kubelet[2440]: E0123 18:57:15.503320 2440 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:57:15.503895 kubelet[2440]: I0123 18:57:15.503843 2440 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:57:15.503895 kubelet[2440]: I0123 18:57:15.503866 2440 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:57:15.504811 kubelet[2440]: I0123 18:57:15.504291 2440 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:57:15.507575 kubelet[2440]: E0123 18:57:15.507421 2440 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 18:57:15.507575 kubelet[2440]: E0123 18:57:15.507477 2440 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" Jan 23 18:57:15.574275 systemd[1]: Created slice kubepods-burstable-pod8dc8c417046c07666a5e5c9bf746b755.slice - libcontainer container kubepods-burstable-pod8dc8c417046c07666a5e5c9bf746b755.slice. 
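The kubepods*.slice units created here form the cgroup tree the kubelet manages pods under, matching the node config logged above (CgroupDriver "systemd", CgroupRoot "/", CgroupsPerQOS true, hence separate burstable and besteffort slices). Their state can be checked with systemctl; a sketch:

```bash
# Inspect the pod-level cgroup slices the kubelet just created.
systemctl status kubepods.slice kubepods-burstable.slice kubepods-besteffort.slice
```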
Jan 23 18:57:15.589298 kubelet[2440]: E0123 18:57:15.589228 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.594282 systemd[1]: Created slice kubepods-burstable-pod22d51d356e8b68261ab02b402dde8b01.slice - libcontainer container kubepods-burstable-pod22d51d356e8b68261ab02b402dde8b01.slice. Jan 23 18:57:15.602560 kubelet[2440]: E0123 18:57:15.601728 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.609544 kubelet[2440]: I0123 18:57:15.609241 2440 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.609301 systemd[1]: Created slice kubepods-burstable-poddab608dd377e396230a9e7750cbcd641.slice - libcontainer container kubepods-burstable-poddab608dd377e396230a9e7750cbcd641.slice. Jan 23 18:57:15.609963 kubelet[2440]: E0123 18:57:15.609681 2440 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.7:6443/api/v1/nodes\": dial tcp 10.128.0.7:6443: connect: connection refused" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.613050 kubelet[2440]: E0123 18:57:15.613001 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.621706 kubelet[2440]: E0123 18:57:15.621659 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.7:6443: connect: connection refused" interval="400ms" Jan 23 18:57:15.719209 kubelet[2440]: I0123 18:57:15.719128 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dc8c417046c07666a5e5c9bf746b755-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"8dc8c417046c07666a5e5c9bf746b755\") " pod="kube-system/kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.719812 kubelet[2440]: I0123 18:57:15.719223 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22d51d356e8b68261ab02b402dde8b01-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"22d51d356e8b68261ab02b402dde8b01\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.719812 kubelet[2440]: I0123 18:57:15.719261 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/22d51d356e8b68261ab02b402dde8b01-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: 
\"22d51d356e8b68261ab02b402dde8b01\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.719812 kubelet[2440]: I0123 18:57:15.719290 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22d51d356e8b68261ab02b402dde8b01-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"22d51d356e8b68261ab02b402dde8b01\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.719812 kubelet[2440]: I0123 18:57:15.719321 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dc8c417046c07666a5e5c9bf746b755-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"8dc8c417046c07666a5e5c9bf746b755\") " pod="kube-system/kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.719953 kubelet[2440]: I0123 18:57:15.719352 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dc8c417046c07666a5e5c9bf746b755-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"8dc8c417046c07666a5e5c9bf746b755\") " pod="kube-system/kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.719953 kubelet[2440]: I0123 18:57:15.719379 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22d51d356e8b68261ab02b402dde8b01-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"22d51d356e8b68261ab02b402dde8b01\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.719953 kubelet[2440]: I0123 18:57:15.719412 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22d51d356e8b68261ab02b402dde8b01-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"22d51d356e8b68261ab02b402dde8b01\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.719953 kubelet[2440]: I0123 18:57:15.719443 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dab608dd377e396230a9e7750cbcd641-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"dab608dd377e396230a9e7750cbcd641\") " pod="kube-system/kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.817361 kubelet[2440]: I0123 18:57:15.817238 2440 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.817851 kubelet[2440]: E0123 18:57:15.817737 2440 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.7:6443/api/v1/nodes\": dial tcp 10.128.0.7:6443: connect: 
connection refused" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:15.891397 containerd[1545]: time="2026-01-23T18:57:15.891216027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal,Uid:8dc8c417046c07666a5e5c9bf746b755,Namespace:kube-system,Attempt:0,}" Jan 23 18:57:15.904245 containerd[1545]: time="2026-01-23T18:57:15.904160300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal,Uid:22d51d356e8b68261ab02b402dde8b01,Namespace:kube-system,Attempt:0,}" Jan 23 18:57:15.931358 containerd[1545]: time="2026-01-23T18:57:15.931242934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal,Uid:dab608dd377e396230a9e7750cbcd641,Namespace:kube-system,Attempt:0,}" Jan 23 18:57:15.939499 containerd[1545]: time="2026-01-23T18:57:15.939415877Z" level=info msg="connecting to shim c84132b331c954f25c4a579f10c55baf09b6aa834c5f43fcd7fab4ca82f96830" address="unix:///run/containerd/s/322c8cfad9597d8ffc3161b8ff39f7c26f1ae1378af53343ba649c2aff80e86d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:15.967128 containerd[1545]: time="2026-01-23T18:57:15.967069035Z" level=info msg="connecting to shim 2e6912599357b204b80ad704510dc02c9b3edab877320e6a3f811f2dba61a774" address="unix:///run/containerd/s/4872c1b96f8245cbeadf883a35bc733cb74bb610fb613277cb23e71e4004f68d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:16.010419 containerd[1545]: time="2026-01-23T18:57:16.010316704Z" level=info msg="connecting to shim 4747489e88922d3fdb77f70b783042ab962c2d29ab8330ec96af5da95729400e" address="unix:///run/containerd/s/7d550ac4366ecccc69e8cea8d1e89eace02d4c851cb2d44e3bd43b53c50efac5" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:16.022643 kubelet[2440]: E0123 18:57:16.022552 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.7:6443: connect: connection refused" interval="800ms" Jan 23 18:57:16.041661 systemd[1]: Started cri-containerd-c84132b331c954f25c4a579f10c55baf09b6aa834c5f43fcd7fab4ca82f96830.scope - libcontainer container c84132b331c954f25c4a579f10c55baf09b6aa834c5f43fcd7fab4ca82f96830. Jan 23 18:57:16.053118 systemd[1]: Started cri-containerd-2e6912599357b204b80ad704510dc02c9b3edab877320e6a3f811f2dba61a774.scope - libcontainer container 2e6912599357b204b80ad704510dc02c9b3edab877320e6a3f811f2dba61a774. Jan 23 18:57:16.076507 systemd[1]: Started cri-containerd-4747489e88922d3fdb77f70b783042ab962c2d29ab8330ec96af5da95729400e.scope - libcontainer container 4747489e88922d3fdb77f70b783042ab962c2d29ab8330ec96af5da95729400e. 
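Once the three cri-containerd-*.scope units are running, the control-plane pod sandboxes can be listed through the CRI; a sketch — the sandbox IDs returned would match the c84132…, 2e6912…, and 4747… scopes started above:

```bash
# List pod sandboxes for the kube-system static pods created above.
sudo crictl pods --namespace kube-system
```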
Jan 23 18:57:16.174003 containerd[1545]: time="2026-01-23T18:57:16.173864315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal,Uid:8dc8c417046c07666a5e5c9bf746b755,Namespace:kube-system,Attempt:0,} returns sandbox id \"c84132b331c954f25c4a579f10c55baf09b6aa834c5f43fcd7fab4ca82f96830\"" Jan 23 18:57:16.181193 kubelet[2440]: E0123 18:57:16.180002 2440 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-21291" Jan 23 18:57:16.187135 containerd[1545]: time="2026-01-23T18:57:16.187039900Z" level=info msg="CreateContainer within sandbox \"c84132b331c954f25c4a579f10c55baf09b6aa834c5f43fcd7fab4ca82f96830\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:57:16.199832 containerd[1545]: time="2026-01-23T18:57:16.199720360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal,Uid:22d51d356e8b68261ab02b402dde8b01,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e6912599357b204b80ad704510dc02c9b3edab877320e6a3f811f2dba61a774\"" Jan 23 18:57:16.204670 containerd[1545]: time="2026-01-23T18:57:16.204615160Z" level=info msg="Container d8e39b8c4d45127558f3aa829a70f61728748ee36b5bd43fcae2ffe065b391a0: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:16.206524 kubelet[2440]: E0123 18:57:16.206480 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:57:16.207483 kubelet[2440]: E0123 18:57:16.207444 2440 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flat" Jan 23 18:57:16.212562 containerd[1545]: time="2026-01-23T18:57:16.212284560Z" level=info msg="CreateContainer within sandbox \"2e6912599357b204b80ad704510dc02c9b3edab877320e6a3f811f2dba61a774\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:57:16.221824 containerd[1545]: time="2026-01-23T18:57:16.221783486Z" level=info msg="CreateContainer within sandbox \"c84132b331c954f25c4a579f10c55baf09b6aa834c5f43fcd7fab4ca82f96830\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d8e39b8c4d45127558f3aa829a70f61728748ee36b5bd43fcae2ffe065b391a0\"" Jan 23 18:57:16.223632 containerd[1545]: time="2026-01-23T18:57:16.223599858Z" level=info msg="StartContainer for \"d8e39b8c4d45127558f3aa829a70f61728748ee36b5bd43fcae2ffe065b391a0\"" Jan 23 18:57:16.224454 kubelet[2440]: I0123 18:57:16.224288 2440 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:16.225436 kubelet[2440]: E0123 18:57:16.225324 2440 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.7:6443/api/v1/nodes\": dial tcp 10.128.0.7:6443: connect: connection refused" 
node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:16.226245 containerd[1545]: time="2026-01-23T18:57:16.226201791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal,Uid:dab608dd377e396230a9e7750cbcd641,Namespace:kube-system,Attempt:0,} returns sandbox id \"4747489e88922d3fdb77f70b783042ab962c2d29ab8330ec96af5da95729400e\"" Jan 23 18:57:16.226772 containerd[1545]: time="2026-01-23T18:57:16.226741544Z" level=info msg="connecting to shim d8e39b8c4d45127558f3aa829a70f61728748ee36b5bd43fcae2ffe065b391a0" address="unix:///run/containerd/s/322c8cfad9597d8ffc3161b8ff39f7c26f1ae1378af53343ba649c2aff80e86d" protocol=ttrpc version=3 Jan 23 18:57:16.229579 kubelet[2440]: E0123 18:57:16.229532 2440 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-21291" Jan 23 18:57:16.232858 containerd[1545]: time="2026-01-23T18:57:16.232791892Z" level=info msg="Container e3b834d48ab24bee718b3a06713341d9f6587f5502dd359dbddb9d3795c3e375: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:16.235656 containerd[1545]: time="2026-01-23T18:57:16.235545418Z" level=info msg="CreateContainer within sandbox \"4747489e88922d3fdb77f70b783042ab962c2d29ab8330ec96af5da95729400e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:57:16.248929 containerd[1545]: time="2026-01-23T18:57:16.248825225Z" level=info msg="CreateContainer within sandbox \"2e6912599357b204b80ad704510dc02c9b3edab877320e6a3f811f2dba61a774\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e3b834d48ab24bee718b3a06713341d9f6587f5502dd359dbddb9d3795c3e375\"" Jan 23 18:57:16.249666 containerd[1545]: time="2026-01-23T18:57:16.249626941Z" level=info msg="StartContainer for \"e3b834d48ab24bee718b3a06713341d9f6587f5502dd359dbddb9d3795c3e375\"" Jan 23 18:57:16.252722 containerd[1545]: time="2026-01-23T18:57:16.252676505Z" level=info msg="Container b2f7771006e055ad4863414afa956d9dcaa67b9b37f481a09eeccfe5c93e8f2d: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:16.252948 containerd[1545]: time="2026-01-23T18:57:16.252910939Z" level=info msg="connecting to shim e3b834d48ab24bee718b3a06713341d9f6587f5502dd359dbddb9d3795c3e375" address="unix:///run/containerd/s/4872c1b96f8245cbeadf883a35bc733cb74bb610fb613277cb23e71e4004f68d" protocol=ttrpc version=3 Jan 23 18:57:16.264570 systemd[1]: Started cri-containerd-d8e39b8c4d45127558f3aa829a70f61728748ee36b5bd43fcae2ffe065b391a0.scope - libcontainer container d8e39b8c4d45127558f3aa829a70f61728748ee36b5bd43fcae2ffe065b391a0. 
Jan 23 18:57:16.271187 containerd[1545]: time="2026-01-23T18:57:16.271082081Z" level=info msg="CreateContainer within sandbox \"4747489e88922d3fdb77f70b783042ab962c2d29ab8330ec96af5da95729400e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b2f7771006e055ad4863414afa956d9dcaa67b9b37f481a09eeccfe5c93e8f2d\"" Jan 23 18:57:16.273204 containerd[1545]: time="2026-01-23T18:57:16.272386541Z" level=info msg="StartContainer for \"b2f7771006e055ad4863414afa956d9dcaa67b9b37f481a09eeccfe5c93e8f2d\"" Jan 23 18:57:16.274565 kubelet[2440]: E0123 18:57:16.274518 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:57:16.275792 containerd[1545]: time="2026-01-23T18:57:16.275736964Z" level=info msg="connecting to shim b2f7771006e055ad4863414afa956d9dcaa67b9b37f481a09eeccfe5c93e8f2d" address="unix:///run/containerd/s/7d550ac4366ecccc69e8cea8d1e89eace02d4c851cb2d44e3bd43b53c50efac5" protocol=ttrpc version=3 Jan 23 18:57:16.299341 systemd[1]: Started cri-containerd-e3b834d48ab24bee718b3a06713341d9f6587f5502dd359dbddb9d3795c3e375.scope - libcontainer container e3b834d48ab24bee718b3a06713341d9f6587f5502dd359dbddb9d3795c3e375. Jan 23 18:57:16.316427 systemd[1]: Started cri-containerd-b2f7771006e055ad4863414afa956d9dcaa67b9b37f481a09eeccfe5c93e8f2d.scope - libcontainer container b2f7771006e055ad4863414afa956d9dcaa67b9b37f481a09eeccfe5c93e8f2d. Jan 23 18:57:16.423728 kubelet[2440]: E0123 18:57:16.423665 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:57:16.429481 containerd[1545]: time="2026-01-23T18:57:16.428191114Z" level=info msg="StartContainer for \"d8e39b8c4d45127558f3aa829a70f61728748ee36b5bd43fcae2ffe065b391a0\" returns successfully" Jan 23 18:57:16.450779 containerd[1545]: time="2026-01-23T18:57:16.450348142Z" level=info msg="StartContainer for \"e3b834d48ab24bee718b3a06713341d9f6587f5502dd359dbddb9d3795c3e375\" returns successfully" Jan 23 18:57:16.473598 kubelet[2440]: E0123 18:57:16.473269 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:16.482465 kubelet[2440]: E0123 18:57:16.482327 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:16.494202 containerd[1545]: time="2026-01-23T18:57:16.493469202Z" level=info msg="StartContainer for \"b2f7771006e055ad4863414afa956d9dcaa67b9b37f481a09eeccfe5c93e8f2d\" returns successfully" Jan 23 18:57:17.031228 kubelet[2440]: I0123 18:57:17.030488 2440 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:17.486889 
kubelet[2440]: E0123 18:57:17.486157 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:17.486889 kubelet[2440]: E0123 18:57:17.486617 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:18.489194 kubelet[2440]: E0123 18:57:18.488895 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:18.491653 kubelet[2440]: E0123 18:57:18.490648 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:19.385191 kubelet[2440]: I0123 18:57:19.385105 2440 apiserver.go:52] "Watching apiserver" Jan 23 18:57:19.492049 kubelet[2440]: E0123 18:57:19.492005 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" not found" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:19.518122 kubelet[2440]: I0123 18:57:19.518070 2440 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:57:19.526951 kubelet[2440]: I0123 18:57:19.526906 2440 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:19.622747 kubelet[2440]: I0123 18:57:19.622698 2440 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:19.693822 kubelet[2440]: E0123 18:57:19.692058 2440 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:19.693822 kubelet[2440]: I0123 18:57:19.693320 2440 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:19.703193 kubelet[2440]: E0123 18:57:19.703041 2440 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:19.703193 kubelet[2440]: I0123 18:57:19.703089 2440 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:19.703642 kubelet[2440]: E0123 18:57:19.703610 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" 
interval="1.6s" Jan 23 18:57:19.710690 kubelet[2440]: E0123 18:57:19.710641 2440 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:21.550650 systemd[1]: Reload requested from client PID 2718 ('systemctl') (unit session-9.scope)... Jan 23 18:57:21.550672 systemd[1]: Reloading... Jan 23 18:57:21.746212 zram_generator::config[2762]: No configuration found. Jan 23 18:57:22.089532 systemd[1]: Reloading finished in 538 ms. Jan 23 18:57:22.135940 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:22.155007 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:57:22.155420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:22.155520 systemd[1]: kubelet.service: Consumed 1.363s CPU time, 130.9M memory peak. Jan 23 18:57:22.158302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:22.544090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:22.556114 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:57:22.631099 kubelet[2810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:57:22.631099 kubelet[2810]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:57:22.631099 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:57:22.633199 kubelet[2810]: I0123 18:57:22.631881 2810 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:57:22.643502 kubelet[2810]: I0123 18:57:22.643463 2810 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:57:22.643666 kubelet[2810]: I0123 18:57:22.643649 2810 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:57:22.644545 kubelet[2810]: I0123 18:57:22.644501 2810 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:57:22.649278 kubelet[2810]: I0123 18:57:22.648538 2810 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 18:57:22.656605 kubelet[2810]: I0123 18:57:22.655555 2810 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:57:22.663218 kubelet[2810]: I0123 18:57:22.663033 2810 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:57:22.667380 kubelet[2810]: I0123 18:57:22.667017 2810 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 18:57:22.667380 kubelet[2810]: I0123 18:57:22.667336 2810 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:57:22.667643 kubelet[2810]: I0123 18:57:22.667374 2810 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:57:22.667862 kubelet[2810]: I0123 18:57:22.667663 2810 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:57:22.667862 kubelet[2810]: I0123 18:57:22.667680 2810 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:57:22.667862 kubelet[2810]: I0123 18:57:22.667755 2810 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:57:22.668241 kubelet[2810]: I0123 18:57:22.667977 2810 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:57:22.668241 kubelet[2810]: I0123 18:57:22.667998 2810 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:57:22.668241 kubelet[2810]: I0123 18:57:22.668030 2810 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:57:22.668241 kubelet[2810]: I0123 18:57:22.668047 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:57:22.672194 kubelet[2810]: I0123 18:57:22.671962 2810 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:57:22.673859 kubelet[2810]: I0123 18:57:22.673825 2810 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:57:22.716148 kubelet[2810]: I0123 18:57:22.715332 2810 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:57:22.716148 kubelet[2810]: I0123 18:57:22.715433 2810 server.go:1289] "Started kubelet" Jan 23 18:57:22.717522 kubelet[2810]: I0123 18:57:22.716922 2810 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 23 18:57:22.717522 kubelet[2810]: I0123 18:57:22.717493 2810 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:57:22.721201 kubelet[2810]: I0123 18:57:22.720438 2810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:57:22.722067 kubelet[2810]: I0123 18:57:22.721729 2810 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:57:22.730354 kubelet[2810]: I0123 18:57:22.730254 2810 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:57:22.730938 kubelet[2810]: I0123 18:57:22.730899 2810 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:57:22.735266 kubelet[2810]: I0123 18:57:22.733424 2810 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:57:22.735266 kubelet[2810]: I0123 18:57:22.734578 2810 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:57:22.736469 kubelet[2810]: I0123 18:57:22.735448 2810 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:57:22.739436 kubelet[2810]: I0123 18:57:22.739268 2810 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:57:22.739767 kubelet[2810]: I0123 18:57:22.739592 2810 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:57:22.751644 kubelet[2810]: I0123 18:57:22.751591 2810 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:57:22.777277 kubelet[2810]: I0123 18:57:22.777236 2810 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 18:57:22.805997 kubelet[2810]: I0123 18:57:22.803839 2810 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:57:22.805997 kubelet[2810]: I0123 18:57:22.803872 2810 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:57:22.805997 kubelet[2810]: I0123 18:57:22.803900 2810 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 18:57:22.805997 kubelet[2810]: I0123 18:57:22.803911 2810 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:57:22.805997 kubelet[2810]: E0123 18:57:22.803971 2810 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:57:22.852484 kubelet[2810]: I0123 18:57:22.851721 2810 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:57:22.852484 kubelet[2810]: I0123 18:57:22.851742 2810 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:57:22.852484 kubelet[2810]: I0123 18:57:22.851771 2810 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:57:22.852484 kubelet[2810]: I0123 18:57:22.852020 2810 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:57:22.852484 kubelet[2810]: I0123 18:57:22.852046 2810 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:57:22.852484 kubelet[2810]: I0123 18:57:22.852075 2810 policy_none.go:49] "None policy: Start" Jan 23 18:57:22.852484 kubelet[2810]: I0123 18:57:22.852092 2810 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:57:22.852484 kubelet[2810]: I0123 18:57:22.852108 2810 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:57:22.852484 kubelet[2810]: I0123 18:57:22.852280 2810 state_mem.go:75] "Updated machine memory state" Jan 23 18:57:22.860197 kubelet[2810]: E0123 18:57:22.860034 2810 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:57:22.860515 kubelet[2810]: I0123 18:57:22.860492 2810 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:57:22.861755 kubelet[2810]: I0123 18:57:22.861649 2810 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:57:22.863310 kubelet[2810]: I0123 18:57:22.863239 2810 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:57:22.872417 kubelet[2810]: E0123 18:57:22.872382 2810 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:57:22.905470 kubelet[2810]: I0123 18:57:22.905243 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.906219 kubelet[2810]: I0123 18:57:22.906142 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.906505 kubelet[2810]: I0123 18:57:22.906140 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.915965 kubelet[2810]: I0123 18:57:22.915903 2810 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Jan 23 18:57:22.918629 kubelet[2810]: I0123 18:57:22.918237 2810 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Jan 23 18:57:22.920103 kubelet[2810]: I0123 18:57:22.919686 2810 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Jan 23 18:57:22.937060 kubelet[2810]: I0123 18:57:22.936682 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22d51d356e8b68261ab02b402dde8b01-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"22d51d356e8b68261ab02b402dde8b01\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.937060 kubelet[2810]: I0123 18:57:22.936746 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/22d51d356e8b68261ab02b402dde8b01-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"22d51d356e8b68261ab02b402dde8b01\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.937060 kubelet[2810]: I0123 18:57:22.936786 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22d51d356e8b68261ab02b402dde8b01-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"22d51d356e8b68261ab02b402dde8b01\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.937060 kubelet[2810]: I0123 18:57:22.936819 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22d51d356e8b68261ab02b402dde8b01-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"22d51d356e8b68261ab02b402dde8b01\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 
18:57:22.937440 kubelet[2810]: I0123 18:57:22.936851 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22d51d356e8b68261ab02b402dde8b01-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"22d51d356e8b68261ab02b402dde8b01\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.937440 kubelet[2810]: I0123 18:57:22.936883 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dab608dd377e396230a9e7750cbcd641-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"dab608dd377e396230a9e7750cbcd641\") " pod="kube-system/kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.937440 kubelet[2810]: I0123 18:57:22.936914 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dc8c417046c07666a5e5c9bf746b755-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"8dc8c417046c07666a5e5c9bf746b755\") " pod="kube-system/kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.937440 kubelet[2810]: I0123 18:57:22.936947 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dc8c417046c07666a5e5c9bf746b755-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"8dc8c417046c07666a5e5c9bf746b755\") " pod="kube-system/kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.937686 kubelet[2810]: I0123 18:57:22.936990 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dc8c417046c07666a5e5c9bf746b755-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" (UID: \"8dc8c417046c07666a5e5c9bf746b755\") " pod="kube-system/kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.978206 kubelet[2810]: I0123 18:57:22.977628 2810 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.991441 kubelet[2810]: I0123 18:57:22.990995 2810 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:22.991441 kubelet[2810]: I0123 18:57:22.991115 2810 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:57:23.670187 kubelet[2810]: I0123 18:57:23.670073 2810 apiserver.go:52] "Watching apiserver" Jan 23 18:57:23.736221 kubelet[2810]: I0123 18:57:23.734740 2810 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:57:23.888787 kubelet[2810]: I0123 18:57:23.888708 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" podStartSLOduration=1.888685105 
podStartE2EDuration="1.888685105s" podCreationTimestamp="2026-01-23 18:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:23.873744585 +0000 UTC m=+1.311099959" watchObservedRunningTime="2026-01-23 18:57:23.888685105 +0000 UTC m=+1.326040474" Jan 23 18:57:23.889025 kubelet[2810]: I0123 18:57:23.888879 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" podStartSLOduration=1.8888710039999999 podStartE2EDuration="1.888871004s" podCreationTimestamp="2026-01-23 18:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:23.888345389 +0000 UTC m=+1.325700759" watchObservedRunningTime="2026-01-23 18:57:23.888871004 +0000 UTC m=+1.326226374" Jan 23 18:57:24.127069 update_engine[1529]: I20260123 18:57:24.126236 1529 update_attempter.cc:509] Updating boot flags... Jan 23 18:57:28.889305 kubelet[2810]: I0123 18:57:28.889220 2810 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:57:28.890320 kubelet[2810]: I0123 18:57:28.890008 2810 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:57:28.890451 containerd[1545]: time="2026-01-23T18:57:28.889769615Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 18:57:29.043197 kubelet[2810]: I0123 18:57:29.043098 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" podStartSLOduration=7.043076968 podStartE2EDuration="7.043076968s" podCreationTimestamp="2026-01-23 18:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:23.900936406 +0000 UTC m=+1.338291777" watchObservedRunningTime="2026-01-23 18:57:29.043076968 +0000 UTC m=+6.480432338" Jan 23 18:57:30.010470 systemd[1]: Created slice kubepods-besteffort-poda797e2ca_ab4d_47c0_a5fc_9c77e8464b2a.slice - libcontainer container kubepods-besteffort-poda797e2ca_ab4d_47c0_a5fc_9c77e8464b2a.slice. Jan 23 18:57:30.073308 systemd[1]: Created slice kubepods-besteffort-pod7372860e_d7b7_4c8e_b699_fa4e280106bc.slice - libcontainer container kubepods-besteffort-pod7372860e_d7b7_4c8e_b699_fa4e280106bc.slice. 
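[Editor's note] The "Created slice" entries just above show how pod UIDs map to systemd cgroup slices under the systemd cgroup driver configured earlier in this log: dashes in the UID become underscores, prefixed with the QoS parent. A minimal sketch of that mapping for the besteffort case seen here (a hypothetical helper; the kubelet computes this internally, and guaranteed-QoS pods use a different parent):

```python
def pod_slice(uid: str, qos: str = "besteffort") -> str:
    # systemd escapes '-' in unit names, so UID dashes become underscores.
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(pod_slice("a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a"))
# -> kubepods-besteffort-poda797e2ca_ab4d_47c0_a5fc_9c77e8464b2a.slice
#    (matches the "Created slice" entry above for kube-proxy-sh6tz)
```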
Jan 23 18:57:30.083385 kubelet[2810]: I0123 18:57:30.083336 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a-xtables-lock\") pod \"kube-proxy-sh6tz\" (UID: \"a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a\") " pod="kube-system/kube-proxy-sh6tz" Jan 23 18:57:30.083994 kubelet[2810]: I0123 18:57:30.083520 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgxgx\" (UniqueName: \"kubernetes.io/projected/a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a-kube-api-access-qgxgx\") pod \"kube-proxy-sh6tz\" (UID: \"a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a\") " pod="kube-system/kube-proxy-sh6tz" Jan 23 18:57:30.083994 kubelet[2810]: I0123 18:57:30.083586 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a-kube-proxy\") pod \"kube-proxy-sh6tz\" (UID: \"a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a\") " pod="kube-system/kube-proxy-sh6tz" Jan 23 18:57:30.083994 kubelet[2810]: I0123 18:57:30.083616 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a-lib-modules\") pod \"kube-proxy-sh6tz\" (UID: \"a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a\") " pod="kube-system/kube-proxy-sh6tz" Jan 23 18:57:30.083994 kubelet[2810]: I0123 18:57:30.083798 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7372860e-d7b7-4c8e-b699-fa4e280106bc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-49pqx\" (UID: \"7372860e-d7b7-4c8e-b699-fa4e280106bc\") " pod="tigera-operator/tigera-operator-7dcd859c48-49pqx" Jan 23 18:57:30.083994 kubelet[2810]: I0123 18:57:30.083857 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk2q9\" (UniqueName: \"kubernetes.io/projected/7372860e-d7b7-4c8e-b699-fa4e280106bc-kube-api-access-dk2q9\") pod \"tigera-operator-7dcd859c48-49pqx\" (UID: \"7372860e-d7b7-4c8e-b699-fa4e280106bc\") " pod="tigera-operator/tigera-operator-7dcd859c48-49pqx" Jan 23 18:57:30.323277 containerd[1545]: time="2026-01-23T18:57:30.323232211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sh6tz,Uid:a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a,Namespace:kube-system,Attempt:0,}" Jan 23 18:57:30.350399 containerd[1545]: time="2026-01-23T18:57:30.350281449Z" level=info msg="connecting to shim 7539c71b0b7032c25d0c52d4842e8f0c8a9087dba1b806cb420629e0a516408c" address="unix:///run/containerd/s/4bbefb0e99cc0f41dbabee4aec2f12eb3103d604663c37883ce7c2d08ff13ae1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:30.382858 containerd[1545]: time="2026-01-23T18:57:30.382752961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-49pqx,Uid:7372860e-d7b7-4c8e-b699-fa4e280106bc,Namespace:tigera-operator,Attempt:0,}" Jan 23 18:57:30.388556 systemd[1]: Started cri-containerd-7539c71b0b7032c25d0c52d4842e8f0c8a9087dba1b806cb420629e0a516408c.scope - libcontainer container 7539c71b0b7032c25d0c52d4842e8f0c8a9087dba1b806cb420629e0a516408c. 
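[Editor's note] The reconciler_common.go entries above identify each attached volume by a UniqueName of the form `kubernetes.io/<plugin>/<podUID>-<volumeName>`. A minimal parsing sketch, assuming only the layout visible in this log (the pod UID is the fixed-width 36-character RFC 4122 form, followed by one separator dash):

```python
def parse_unique_name(unique: str):
    # "kubernetes.io/host-path/<uid>-<volume>" -> (plugin, uid, volume)
    prefix, rest = unique.rsplit("/", 1)
    plugin = prefix.removeprefix("kubernetes.io/")   # Python 3.9+
    uid, volume = rest[:36], rest[37:]               # UID is 36 chars, then '-'
    return plugin, uid, volume

print(parse_unique_name(
    "kubernetes.io/host-path/a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a-xtables-lock"))
# -> ('host-path', 'a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a', 'xtables-lock')
```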
Jan 23 18:57:30.420278 containerd[1545]: time="2026-01-23T18:57:30.420226209Z" level=info msg="connecting to shim a0ffed2b8a5702947fc35c92256c46e4d26ff3e42f6de5cca9a27d7fca640f43" address="unix:///run/containerd/s/c571f60eddc35a9076f6a15830d7540736bbcd759c40ec6d788fb9a31e521016" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:30.456317 containerd[1545]: time="2026-01-23T18:57:30.456259368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sh6tz,Uid:a797e2ca-ab4d-47c0-a5fc-9c77e8464b2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7539c71b0b7032c25d0c52d4842e8f0c8a9087dba1b806cb420629e0a516408c\"" Jan 23 18:57:30.466548 systemd[1]: Started cri-containerd-a0ffed2b8a5702947fc35c92256c46e4d26ff3e42f6de5cca9a27d7fca640f43.scope - libcontainer container a0ffed2b8a5702947fc35c92256c46e4d26ff3e42f6de5cca9a27d7fca640f43. Jan 23 18:57:30.470982 containerd[1545]: time="2026-01-23T18:57:30.467879780Z" level=info msg="CreateContainer within sandbox \"7539c71b0b7032c25d0c52d4842e8f0c8a9087dba1b806cb420629e0a516408c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:57:30.500199 containerd[1545]: time="2026-01-23T18:57:30.499187271Z" level=info msg="Container aed4ca896256293e25ae3282bbf5dff4fec2e88d3b2e0791e53270ff9af1123d: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:30.512536 containerd[1545]: time="2026-01-23T18:57:30.512472784Z" level=info msg="CreateContainer within sandbox \"7539c71b0b7032c25d0c52d4842e8f0c8a9087dba1b806cb420629e0a516408c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aed4ca896256293e25ae3282bbf5dff4fec2e88d3b2e0791e53270ff9af1123d\"" Jan 23 18:57:30.513980 containerd[1545]: time="2026-01-23T18:57:30.513944104Z" level=info msg="StartContainer for \"aed4ca896256293e25ae3282bbf5dff4fec2e88d3b2e0791e53270ff9af1123d\"" Jan 23 18:57:30.517728 containerd[1545]: time="2026-01-23T18:57:30.517675509Z" level=info msg="connecting to shim aed4ca896256293e25ae3282bbf5dff4fec2e88d3b2e0791e53270ff9af1123d" address="unix:///run/containerd/s/4bbefb0e99cc0f41dbabee4aec2f12eb3103d604663c37883ce7c2d08ff13ae1" protocol=ttrpc version=3 Jan 23 18:57:30.556512 systemd[1]: Started cri-containerd-aed4ca896256293e25ae3282bbf5dff4fec2e88d3b2e0791e53270ff9af1123d.scope - libcontainer container aed4ca896256293e25ae3282bbf5dff4fec2e88d3b2e0791e53270ff9af1123d. Jan 23 18:57:30.571431 containerd[1545]: time="2026-01-23T18:57:30.571342506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-49pqx,Uid:7372860e-d7b7-4c8e-b699-fa4e280106bc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a0ffed2b8a5702947fc35c92256c46e4d26ff3e42f6de5cca9a27d7fca640f43\"" Jan 23 18:57:30.576041 containerd[1545]: time="2026-01-23T18:57:30.575925781Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 18:57:30.665917 containerd[1545]: time="2026-01-23T18:57:30.665786906Z" level=info msg="StartContainer for \"aed4ca896256293e25ae3282bbf5dff4fec2e88d3b2e0791e53270ff9af1123d\" returns successfully" Jan 23 18:57:31.549700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117257876.mount: Deactivated successfully. 
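[Editor's note] The pull that starts above is requested by tag (`quay.io/tigera/operator:v1.38.7`) and, once it completes below, is also recorded by digest (`quay.io/tigera/operator@sha256:...`). A minimal sketch splitting both reference forms into their parts; this is an illustrative parser only (it ignores registry ports), not the reference library CRI or containerd actually use:

```python
def parse_ref(ref: str):
    if "@" in ref:                       # digest reference: repo@sha256:...
        repo, digest = ref.split("@", 1)
        return {"repo": repo, "digest": digest}
    repo, _, tag = ref.rpartition(":")   # tag reference: repo:tag
    return {"repo": repo, "tag": tag}

print(parse_ref("quay.io/tigera/operator:v1.38.7"))
# -> {'repo': 'quay.io/tigera/operator', 'tag': 'v1.38.7'}
```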
Jan 23 18:57:31.564877 kubelet[2810]: I0123 18:57:31.564712 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sh6tz" podStartSLOduration=2.564686899 podStartE2EDuration="2.564686899s" podCreationTimestamp="2026-01-23 18:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:30.888231547 +0000 UTC m=+8.325586917" watchObservedRunningTime="2026-01-23 18:57:31.564686899 +0000 UTC m=+9.002042270" Jan 23 18:57:32.494861 containerd[1545]: time="2026-01-23T18:57:32.494782443Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:32.496199 containerd[1545]: time="2026-01-23T18:57:32.496139812Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 18:57:32.497772 containerd[1545]: time="2026-01-23T18:57:32.497690059Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:32.500629 containerd[1545]: time="2026-01-23T18:57:32.500561989Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:32.502089 containerd[1545]: time="2026-01-23T18:57:32.501468436Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.925216337s" Jan 23 18:57:32.502089 containerd[1545]: time="2026-01-23T18:57:32.501516502Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 18:57:32.507056 containerd[1545]: time="2026-01-23T18:57:32.506996403Z" level=info msg="CreateContainer within sandbox \"a0ffed2b8a5702947fc35c92256c46e4d26ff3e42f6de5cca9a27d7fca640f43\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 18:57:32.521343 containerd[1545]: time="2026-01-23T18:57:32.521290720Z" level=info msg="Container 1b688a06288d6ad40a3f948180900c18c7b6509b2c9962262e095a301787cfb1: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:32.531895 containerd[1545]: time="2026-01-23T18:57:32.531839725Z" level=info msg="CreateContainer within sandbox \"a0ffed2b8a5702947fc35c92256c46e4d26ff3e42f6de5cca9a27d7fca640f43\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1b688a06288d6ad40a3f948180900c18c7b6509b2c9962262e095a301787cfb1\"" Jan 23 18:57:32.533025 containerd[1545]: time="2026-01-23T18:57:32.532986562Z" level=info msg="StartContainer for \"1b688a06288d6ad40a3f948180900c18c7b6509b2c9962262e095a301787cfb1\"" Jan 23 18:57:32.535138 containerd[1545]: time="2026-01-23T18:57:32.535079053Z" level=info msg="connecting to shim 1b688a06288d6ad40a3f948180900c18c7b6509b2c9962262e095a301787cfb1" address="unix:///run/containerd/s/c571f60eddc35a9076f6a15830d7540736bbcd759c40ec6d788fb9a31e521016" protocol=ttrpc version=3 Jan 23 18:57:32.570545 systemd[1]: Started 
cri-containerd-1b688a06288d6ad40a3f948180900c18c7b6509b2c9962262e095a301787cfb1.scope - libcontainer container 1b688a06288d6ad40a3f948180900c18c7b6509b2c9962262e095a301787cfb1. Jan 23 18:57:32.613888 containerd[1545]: time="2026-01-23T18:57:32.613835844Z" level=info msg="StartContainer for \"1b688a06288d6ad40a3f948180900c18c7b6509b2c9962262e095a301787cfb1\" returns successfully" Jan 23 18:57:32.893887 kubelet[2810]: I0123 18:57:32.893103 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-49pqx" podStartSLOduration=0.96538501 podStartE2EDuration="2.893081602s" podCreationTimestamp="2026-01-23 18:57:30 +0000 UTC" firstStartedPulling="2026-01-23 18:57:30.575072977 +0000 UTC m=+8.012428325" lastFinishedPulling="2026-01-23 18:57:32.502769563 +0000 UTC m=+9.940124917" observedRunningTime="2026-01-23 18:57:32.89259953 +0000 UTC m=+10.329954901" watchObservedRunningTime="2026-01-23 18:57:32.893081602 +0000 UTC m=+10.330436972" Jan 23 18:57:39.736543 sudo[1887]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:39.771205 sshd[1886]: Connection closed by 4.153.228.146 port 44882 Jan 23 18:57:39.772072 sshd-session[1883]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:39.781677 systemd[1]: sshd@9-10.128.0.7:22-4.153.228.146:44882.service: Deactivated successfully. Jan 23 18:57:39.785966 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:57:39.787580 systemd[1]: session-9.scope: Consumed 6.758s CPU time, 232.7M memory peak. Jan 23 18:57:39.790020 systemd-logind[1526]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:57:39.796454 systemd-logind[1526]: Removed session 9. Jan 23 18:57:47.699617 kubelet[2810]: I0123 18:57:47.699572 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26lrm\" (UniqueName: \"kubernetes.io/projected/77f5c8b3-2e23-4d73-855a-9e748a4712d8-kube-api-access-26lrm\") pod \"calico-typha-679df78b8-5qbxp\" (UID: \"77f5c8b3-2e23-4d73-855a-9e748a4712d8\") " pod="calico-system/calico-typha-679df78b8-5qbxp" Jan 23 18:57:47.700912 kubelet[2810]: I0123 18:57:47.700253 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77f5c8b3-2e23-4d73-855a-9e748a4712d8-tigera-ca-bundle\") pod \"calico-typha-679df78b8-5qbxp\" (UID: \"77f5c8b3-2e23-4d73-855a-9e748a4712d8\") " pod="calico-system/calico-typha-679df78b8-5qbxp" Jan 23 18:57:47.700912 kubelet[2810]: I0123 18:57:47.700295 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/77f5c8b3-2e23-4d73-855a-9e748a4712d8-typha-certs\") pod \"calico-typha-679df78b8-5qbxp\" (UID: \"77f5c8b3-2e23-4d73-855a-9e748a4712d8\") " pod="calico-system/calico-typha-679df78b8-5qbxp" Jan 23 18:57:47.711069 systemd[1]: Created slice kubepods-besteffort-pod77f5c8b3_2e23_4d73_855a_9e748a4712d8.slice - libcontainer container kubepods-besteffort-pod77f5c8b3_2e23_4d73_855a_9e748a4712d8.slice. 
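[Editor's note] The pod_startup_latency_tracker entry above for tigera-operator-7dcd859c48-49pqx makes the bookkeeping visible: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is, as the numbers in this log imply, that end-to-end duration minus the image-pull window. A worked check with the timestamps copied from the entry (seconds within the minute):

```python
e2e  = 2.893081602                     # podStartE2EDuration
pull = 32.502769563 - 30.575072977     # lastFinishedPulling - firstStartedPulling
slo  = e2e - pull
print(round(slo, 9))
# -> 0.965385016, matching the logged podStartSLOduration=0.96538501
#    up to float rounding
```

The kube-proxy and control-plane pods earlier in this log show pull durations of zero ("0001-01-01 00:00:00" sentinels), so for them the SLO and E2E durations coincide.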
Jan 23 18:57:48.030298 containerd[1545]: time="2026-01-23T18:57:48.028607278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-679df78b8-5qbxp,Uid:77f5c8b3-2e23-4d73-855a-9e748a4712d8,Namespace:calico-system,Attempt:0,}" Jan 23 18:57:48.041760 systemd[1]: Created slice kubepods-besteffort-podc8b58ce0_9b0e_4085_85e2_2ef5e5c96603.slice - libcontainer container kubepods-besteffort-podc8b58ce0_9b0e_4085_85e2_2ef5e5c96603.slice. Jan 23 18:57:48.080871 containerd[1545]: time="2026-01-23T18:57:48.080813815Z" level=info msg="connecting to shim f6e2280c652bd3861ed551ec3533c13f5ec92a4b78924a3efa9fa350bf35e274" address="unix:///run/containerd/s/4801715b9dd8c34ca64ea45334a295b750b39da7f86cd0a557d6521afe3402fd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:48.105506 kubelet[2810]: I0123 18:57:48.104539 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-cni-bin-dir\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.105506 kubelet[2810]: I0123 18:57:48.104617 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-tigera-ca-bundle\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.105506 kubelet[2810]: I0123 18:57:48.104649 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-cni-log-dir\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.105506 kubelet[2810]: I0123 18:57:48.104681 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-var-lib-calico\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.105506 kubelet[2810]: I0123 18:57:48.104714 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-node-certs\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.105887 kubelet[2810]: I0123 18:57:48.104739 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-policysync\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.105887 kubelet[2810]: I0123 18:57:48.104768 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-cni-net-dir\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.105887 kubelet[2810]: I0123 18:57:48.104795 2810 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-var-run-calico\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.105887 kubelet[2810]: I0123 18:57:48.104821 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zjhv\" (UniqueName: \"kubernetes.io/projected/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-kube-api-access-4zjhv\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.105887 kubelet[2810]: I0123 18:57:48.104856 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-flexvol-driver-host\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.106149 kubelet[2810]: I0123 18:57:48.104885 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-lib-modules\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.106149 kubelet[2810]: I0123 18:57:48.104912 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8b58ce0-9b0e-4085-85e2-2ef5e5c96603-xtables-lock\") pod \"calico-node-mb4kn\" (UID: \"c8b58ce0-9b0e-4085-85e2-2ef5e5c96603\") " pod="calico-system/calico-node-mb4kn" Jan 23 18:57:48.120654 systemd[1]: Started cri-containerd-f6e2280c652bd3861ed551ec3533c13f5ec92a4b78924a3efa9fa350bf35e274.scope - libcontainer container f6e2280c652bd3861ed551ec3533c13f5ec92a4b78924a3efa9fa350bf35e274. Jan 23 18:57:48.199522 containerd[1545]: time="2026-01-23T18:57:48.199444460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-679df78b8-5qbxp,Uid:77f5c8b3-2e23-4d73-855a-9e748a4712d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6e2280c652bd3861ed551ec3533c13f5ec92a4b78924a3efa9fa350bf35e274\"" Jan 23 18:57:48.201855 containerd[1545]: time="2026-01-23T18:57:48.201796479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 18:57:48.220200 kubelet[2810]: E0123 18:57:48.218461 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.220200 kubelet[2810]: W0123 18:57:48.218494 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.220200 kubelet[2810]: E0123 18:57:48.218540 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:57:48.238036 kubelet[2810]: E0123 18:57:48.237823 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.238036 kubelet[2810]: W0123 18:57:48.237883 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.239184 kubelet[2810]: E0123 18:57:48.237915 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.277223 kubelet[2810]: E0123 18:57:48.276728 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40" Jan 23 18:57:48.298160 kubelet[2810]: E0123 18:57:48.297030 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.298407 kubelet[2810]: W0123 18:57:48.298379 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.299493 kubelet[2810]: E0123 18:57:48.299431 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.300148 kubelet[2810]: E0123 18:57:48.300076 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.300382 kubelet[2810]: W0123 18:57:48.300128 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.300382 kubelet[2810]: E0123 18:57:48.300331 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.301217 kubelet[2810]: E0123 18:57:48.301193 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.301217 kubelet[2810]: W0123 18:57:48.301216 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.301217 kubelet[2810]: E0123 18:57:48.301236 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:57:48.302629 kubelet[2810]: E0123 18:57:48.302600 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.302629 kubelet[2810]: W0123 18:57:48.302627 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.302894 kubelet[2810]: E0123 18:57:48.302647 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.304732 kubelet[2810]: E0123 18:57:48.304706 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.304732 kubelet[2810]: W0123 18:57:48.304731 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.304891 kubelet[2810]: E0123 18:57:48.304749 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.305109 kubelet[2810]: E0123 18:57:48.305043 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.305109 kubelet[2810]: W0123 18:57:48.305059 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.305109 kubelet[2810]: E0123 18:57:48.305106 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.306642 kubelet[2810]: E0123 18:57:48.306616 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.306642 kubelet[2810]: W0123 18:57:48.306642 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.307024 kubelet[2810]: E0123 18:57:48.306660 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.307297 kubelet[2810]: E0123 18:57:48.307255 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.307297 kubelet[2810]: W0123 18:57:48.307278 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.307297 kubelet[2810]: E0123 18:57:48.307297 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:57:48.308225 kubelet[2810]: E0123 18:57:48.308046 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.308225 kubelet[2810]: W0123 18:57:48.308067 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.308613 kubelet[2810]: E0123 18:57:48.308086 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.309207 kubelet[2810]: E0123 18:57:48.308913 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.309207 kubelet[2810]: W0123 18:57:48.308928 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.309207 kubelet[2810]: E0123 18:57:48.308945 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.309791 kubelet[2810]: E0123 18:57:48.309750 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.309868 kubelet[2810]: W0123 18:57:48.309799 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.309868 kubelet[2810]: E0123 18:57:48.309819 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.310266 kubelet[2810]: E0123 18:57:48.310241 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.310357 kubelet[2810]: W0123 18:57:48.310282 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.310357 kubelet[2810]: E0123 18:57:48.310301 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.310799 kubelet[2810]: E0123 18:57:48.310765 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.310799 kubelet[2810]: W0123 18:57:48.310798 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.310955 kubelet[2810]: E0123 18:57:48.310815 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:57:48.311265 kubelet[2810]: E0123 18:57:48.311241 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.311265 kubelet[2810]: W0123 18:57:48.311262 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.311988 kubelet[2810]: E0123 18:57:48.311306 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.311988 kubelet[2810]: E0123 18:57:48.311925 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.312746 kubelet[2810]: W0123 18:57:48.311941 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.312746 kubelet[2810]: E0123 18:57:48.312620 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.313040 kubelet[2810]: E0123 18:57:48.312984 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.313040 kubelet[2810]: W0123 18:57:48.312999 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.313040 kubelet[2810]: E0123 18:57:48.313016 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.313751 kubelet[2810]: E0123 18:57:48.313664 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.313751 kubelet[2810]: W0123 18:57:48.313723 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.313751 kubelet[2810]: E0123 18:57:48.313745 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:57:48.315400 kubelet[2810]: E0123 18:57:48.315356 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:57:48.315500 kubelet[2810]: W0123 18:57:48.315413 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:57:48.315500 kubelet[2810]: E0123 18:57:48.315433 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 23 18:57:48.315836 kubelet[2810]: E0123 18:57:48.315813 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:57:48.315918 kubelet[2810]: W0123 18:57:48.315853 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:57:48.315918 kubelet[2810]: E0123 18:57:48.315897 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three-record FlexVolume probe failure repeats verbatim from 18:57:48.316 through 18:57:48.324; duplicate records elided]
Jan 23 18:57:48.318255 kubelet[2810]: I0123 18:57:48.318219 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1aa00049-b6aa-4c4a-9b9a-78530a9aeb40-kubelet-dir\") pod \"csi-node-driver-g5nws\" (UID: \"1aa00049-b6aa-4c4a-9b9a-78530a9aeb40\") " pod="calico-system/csi-node-driver-g5nws"
Jan 23 18:57:48.319143 kubelet[2810]: I0123 18:57:48.318937 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrwww\" (UniqueName: \"kubernetes.io/projected/1aa00049-b6aa-4c4a-9b9a-78530a9aeb40-kube-api-access-vrwww\") pod \"csi-node-driver-g5nws\" (UID: \"1aa00049-b6aa-4c4a-9b9a-78530a9aeb40\") " pod="calico-system/csi-node-driver-g5nws"
Jan 23 18:57:48.320825 kubelet[2810]: I0123 18:57:48.320794 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1aa00049-b6aa-4c4a-9b9a-78530a9aeb40-registration-dir\") pod \"csi-node-driver-g5nws\" (UID: \"1aa00049-b6aa-4c4a-9b9a-78530a9aeb40\") " pod="calico-system/csi-node-driver-g5nws"
Jan 23 18:57:48.322694 kubelet[2810]: I0123 18:57:48.321988 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1aa00049-b6aa-4c4a-9b9a-78530a9aeb40-socket-dir\") pod \"csi-node-driver-g5nws\" (UID: \"1aa00049-b6aa-4c4a-9b9a-78530a9aeb40\") " pod="calico-system/csi-node-driver-g5nws"
Jan 23 18:57:48.322694 kubelet[2810]: I0123 18:57:48.322439 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1aa00049-b6aa-4c4a-9b9a-78530a9aeb40-varrun\") pod \"csi-node-driver-g5nws\" (UID: \"1aa00049-b6aa-4c4a-9b9a-78530a9aeb40\") " pod="calico-system/csi-node-driver-g5nws"
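The three records above are the kubelet's FlexVolume prober at work: it scans the plugin directory, execs each driver binary with the init verb, and parses stdout as JSON. Here the uds binary has not been installed yet, so the exec fails, stdout is empty, and unmarshalling "" yields "unexpected end of JSON input". A minimal sketch of the init handshake such a driver is expected to implement follows; the path and plugin name come from the log itself, while the response shape follows the usual FlexVolume status-JSON convention.

```go
// Sketch of the FlexVolume "init" handshake the kubelet prober expects from
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
// With no binary installed, the kubelet reads "" and json.Unmarshal fails,
// producing the "unexpected end of JSON input" records above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the conventional FlexVolume call result.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// A node-local socket driver has nothing to attach, so it
		// advertises attach=false and reports success.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this stub does not implement is reported as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

Later in this log the flexvol-driver container from ghcr.io/flatcar/calico/pod2daemon-flexvol runs at 18:57:51, evidently the component that installs this binary: the probe failures stop appearing after it exits.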
Jan 23 18:57:48.349247 containerd[1545]: time="2026-01-23T18:57:48.349195743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mb4kn,Uid:c8b58ce0-9b0e-4085-85e2-2ef5e5c96603,Namespace:calico-system,Attempt:0,}"
Jan 23 18:57:48.385700 containerd[1545]: time="2026-01-23T18:57:48.384912622Z" level=info msg="connecting to shim 775fa643d5f7cfa82bf74c478c6e629c47a1ecd3bea095bd8025361718d732d8" address="unix:///run/containerd/s/b71283d733386b9a17ce42a51abd06a2ccaa5b6fc55e9ee2a8ee4c3c394eb1fc" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:57:48.418423 systemd[1]: Started cri-containerd-775fa643d5f7cfa82bf74c478c6e629c47a1ecd3bea095bd8025361718d732d8.scope - libcontainer container 775fa643d5f7cfa82bf74c478c6e629c47a1ecd3bea095bd8025361718d732d8.
[the same three-record FlexVolume probe failure repeats verbatim from 18:57:48.424 through 18:57:48.462; duplicate records elided]
Jan 23 18:57:48.483709 containerd[1545]: time="2026-01-23T18:57:48.483638153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mb4kn,Uid:c8b58ce0-9b0e-4085-85e2-2ef5e5c96603,Namespace:calico-system,Attempt:0,} returns sandbox id \"775fa643d5f7cfa82bf74c478c6e629c47a1ecd3bea095bd8025361718d732d8\""
Jan 23 18:57:49.214872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111882970.mount: Deactivated successfully.
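The "connecting to shim ... protocol=ttrpc" record is containerd dialing the per-sandbox shim socket; the resulting container is then visible through containerd's regular Go client in the k8s.io namespace named above. A small inspection sketch, assuming the standard github.com/containerd/containerd client and the default socket path (neither is stated in the log):

```go
// Sketch: list the containers behind the cri-containerd-*.scope units above
// via containerd's Go client (assumes the default containerd socket and the
// k8s.io namespace shown in the shim address records).
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			continue
		}
		fmt.Printf("%s  image=%s\n", c.ID(), info.Image)
	}
}
```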
Jan 23 18:57:49.804915 kubelet[2810]: E0123 18:57:49.804863 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40"
Jan 23 18:57:50.309937 containerd[1545]: time="2026-01-23T18:57:50.309869037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:50.311254 containerd[1545]: time="2026-01-23T18:57:50.311196159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 23 18:57:50.313056 containerd[1545]: time="2026-01-23T18:57:50.312906921Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:50.316546 containerd[1545]: time="2026-01-23T18:57:50.316460774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:50.317928 containerd[1545]: time="2026-01-23T18:57:50.317728842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.1156753s"
Jan 23 18:57:50.317928 containerd[1545]: time="2026-01-23T18:57:50.317773291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 23 18:57:50.322148 containerd[1545]: time="2026-01-23T18:57:50.321694578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 18:57:50.352904 containerd[1545]: time="2026-01-23T18:57:50.352486982Z" level=info msg="CreateContainer within sandbox \"f6e2280c652bd3861ed551ec3533c13f5ec92a4b78924a3efa9fa350bf35e274\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 18:57:50.365901 containerd[1545]: time="2026-01-23T18:57:50.362735543Z" level=info msg="Container 22e03c3c7c85656942f0d218520691510b2d9e1ee8219bbc7012d5cfac650d57: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:57:50.377803 containerd[1545]: time="2026-01-23T18:57:50.377727270Z" level=info msg="CreateContainer within sandbox \"f6e2280c652bd3861ed551ec3533c13f5ec92a4b78924a3efa9fa350bf35e274\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"22e03c3c7c85656942f0d218520691510b2d9e1ee8219bbc7012d5cfac650d57\""
Jan 23 18:57:50.379710 containerd[1545]: time="2026-01-23T18:57:50.379380664Z" level=info msg="StartContainer for \"22e03c3c7c85656942f0d218520691510b2d9e1ee8219bbc7012d5cfac650d57\""
Jan 23 18:57:50.382326 containerd[1545]: time="2026-01-23T18:57:50.382290454Z" level=info msg="connecting to shim 22e03c3c7c85656942f0d218520691510b2d9e1ee8219bbc7012d5cfac650d57" address="unix:///run/containerd/s/4801715b9dd8c34ca64ea45334a295b750b39da7f86cd0a557d6521afe3402fd" protocol=ttrpc version=3
Jan 23 18:57:50.434426 systemd[1]: Started cri-containerd-22e03c3c7c85656942f0d218520691510b2d9e1ee8219bbc7012d5cfac650d57.scope - libcontainer container 22e03c3c7c85656942f0d218520691510b2d9e1ee8219bbc7012d5cfac650d57.
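The "cni plugin not initialized" records that recur through this log mean the runtime has found no CNI network configuration yet, so pods needing pod networking are skipped on each sync. Readiness here amounts to a config file appearing in the CNI confdir. A sketch of that check using the CNI project's libcni helpers; the /etc/cni/net.d path is an assumption (the conventional default), not something this log states:

```go
// Sketch: approximate the runtime's CNI readiness check by looking for a
// network config under the CNI confdir. The /etc/cni/net.d path is an
// assumed conventional default; the log itself does not name it.
package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist", ".json"})
	if err != nil {
		log.Fatal(err)
	}
	if len(files) == 0 {
		// This is the state behind "NetworkReady=false ... cni plugin
		// not initialized": no network config has been installed yet.
		fmt.Println("no CNI network configured")
		return
	}
	for _, f := range files {
		fmt.Println("found CNI config:", f)
	}
}
```

In this log, the install-cni container started at 18:57:56 is the step that installs that configuration; the node flips to ready at 18:57:57.322.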
Jan 23 18:57:50.524345 containerd[1545]: time="2026-01-23T18:57:50.524242983Z" level=info msg="StartContainer for \"22e03c3c7c85656942f0d218520691510b2d9e1ee8219bbc7012d5cfac650d57\" returns successfully"
[the same three-record FlexVolume probe failure repeats verbatim from 18:57:51.034 through 18:57:51.076; duplicate records elided]
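Each "Started cri-containerd-<id>.scope" record is systemd creating a transient scope unit so the container's processes sit in their own cgroup under systemd's supervision, which is also why the later "Deactivated successfully" and "Consumed ... CPU time" records are emitted per container. A sketch that enumerates these scopes over systemd's D-Bus API, assuming the go-systemd bindings (not something this log itself uses):

```go
// Sketch: enumerate the transient cri-containerd-*.scope units that systemd
// reports as "Started" in this log, using the go-systemd D-Bus bindings.
package main

import (
	"context"
	"fmt"
	"log"

	sd "github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := sd.NewWithContext(ctx) // connects to the system bus
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	units, err := conn.ListUnitsByPatternsContext(ctx, nil, []string{"cri-containerd-*.scope"})
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range units {
		fmt.Printf("%s  %s/%s\n", u.Name, u.ActiveState, u.SubState)
	}
}
```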
Jan 23 18:57:51.310285 containerd[1545]: time="2026-01-23T18:57:51.309995406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:51.311795 containerd[1545]: time="2026-01-23T18:57:51.311664929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 23 18:57:51.313084 containerd[1545]: time="2026-01-23T18:57:51.313036439Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:51.317356 containerd[1545]: time="2026-01-23T18:57:51.316893707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:51.318652 containerd[1545]: time="2026-01-23T18:57:51.318599580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 996.651299ms"
Jan 23 18:57:51.318652 containerd[1545]: time="2026-01-23T18:57:51.318649727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 23 18:57:51.327913 containerd[1545]: time="2026-01-23T18:57:51.327847884Z" level=info msg="CreateContainer within sandbox \"775fa643d5f7cfa82bf74c478c6e629c47a1ecd3bea095bd8025361718d732d8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 23 18:57:51.344116 containerd[1545]: time="2026-01-23T18:57:51.341186857Z" level=info msg="Container ba3ac6a10075df4a4cf4ecb59ddd29be3b26ffc3d724fc9a885a1d9f5f6f73f6: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:57:51.358286 containerd[1545]: time="2026-01-23T18:57:51.358218410Z" level=info msg="CreateContainer within sandbox \"775fa643d5f7cfa82bf74c478c6e629c47a1ecd3bea095bd8025361718d732d8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ba3ac6a10075df4a4cf4ecb59ddd29be3b26ffc3d724fc9a885a1d9f5f6f73f6\""
Jan 23 18:57:51.359290 containerd[1545]: time="2026-01-23T18:57:51.359256241Z" level=info msg="StartContainer for \"ba3ac6a10075df4a4cf4ecb59ddd29be3b26ffc3d724fc9a885a1d9f5f6f73f6\""
Jan 23 18:57:51.361568 containerd[1545]: time="2026-01-23T18:57:51.361483719Z" level=info msg="connecting to shim ba3ac6a10075df4a4cf4ecb59ddd29be3b26ffc3d724fc9a885a1d9f5f6f73f6" address="unix:///run/containerd/s/b71283d733386b9a17ce42a51abd06a2ccaa5b6fc55e9ee2a8ee4c3c394eb1fc" protocol=ttrpc version=3
Jan 23 18:57:51.398499 systemd[1]: Started cri-containerd-ba3ac6a10075df4a4cf4ecb59ddd29be3b26ffc3d724fc9a885a1d9f5f6f73f6.scope - libcontainer container ba3ac6a10075df4a4cf4ecb59ddd29be3b26ffc3d724fc9a885a1d9f5f6f73f6.
Jan 23 18:57:51.489865 containerd[1545]: time="2026-01-23T18:57:51.489483509Z" level=info msg="StartContainer for \"ba3ac6a10075df4a4cf4ecb59ddd29be3b26ffc3d724fc9a885a1d9f5f6f73f6\" returns successfully"
Jan 23 18:57:51.506283 systemd[1]: cri-containerd-ba3ac6a10075df4a4cf4ecb59ddd29be3b26ffc3d724fc9a885a1d9f5f6f73f6.scope: Deactivated successfully.
Jan 23 18:57:51.512874 containerd[1545]: time="2026-01-23T18:57:51.512755331Z" level=info msg="received container exit event container_id:\"ba3ac6a10075df4a4cf4ecb59ddd29be3b26ffc3d724fc9a885a1d9f5f6f73f6\" id:\"ba3ac6a10075df4a4cf4ecb59ddd29be3b26ffc3d724fc9a885a1d9f5f6f73f6\" pid:3513 exited_at:{seconds:1769194671 nanos:511997458}"
Jan 23 18:57:51.553445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba3ac6a10075df4a4cf4ecb59ddd29be3b26ffc3d724fc9a885a1d9f5f6f73f6-rootfs.mount: Deactivated successfully.
Jan 23 18:57:51.832307 kubelet[2810]: E0123 18:57:51.805062 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40"
Jan 23 18:57:51.947890 kubelet[2810]: I0123 18:57:51.947663 2810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 18:57:51.973202 kubelet[2810]: I0123 18:57:51.972966 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-679df78b8-5qbxp" podStartSLOduration=2.8534948829999998 podStartE2EDuration="4.972947567s" podCreationTimestamp="2026-01-23 18:57:47 +0000 UTC" firstStartedPulling="2026-01-23 18:57:48.201426041 +0000 UTC m=+25.638781396" lastFinishedPulling="2026-01-23 18:57:50.320878716 +0000 UTC m=+27.758234080" observedRunningTime="2026-01-23 18:57:50.99513684 +0000 UTC m=+28.432492210" watchObservedRunningTime="2026-01-23 18:57:51.972947567 +0000 UTC m=+29.410302936"
Jan 23 18:57:52.954186 containerd[1545]: time="2026-01-23T18:57:52.953952197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 23 18:57:53.804678 kubelet[2810]: E0123 18:57:53.804571 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40"
Jan 23 18:57:55.804989 kubelet[2810]: E0123 18:57:55.804923 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40"
Jan 23 18:57:56.089206 containerd[1545]: time="2026-01-23T18:57:56.089130750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:56.090792 containerd[1545]: time="2026-01-23T18:57:56.090741845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 23 18:57:56.092613 containerd[1545]: time="2026-01-23T18:57:56.092528873Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:56.096754 containerd[1545]: time="2026-01-23T18:57:56.096278403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:56.097882 containerd[1545]: time="2026-01-23T18:57:56.097290320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.143278074s"
Jan 23 18:57:56.097882 containerd[1545]: time="2026-01-23T18:57:56.097334443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 23 18:57:56.102375 containerd[1545]: time="2026-01-23T18:57:56.102314789Z" level=info msg="CreateContainer within sandbox \"775fa643d5f7cfa82bf74c478c6e629c47a1ecd3bea095bd8025361718d732d8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 23 18:57:56.114454 containerd[1545]: time="2026-01-23T18:57:56.114399028Z" level=info msg="Container 4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:57:56.131289 containerd[1545]: time="2026-01-23T18:57:56.131229468Z" level=info msg="CreateContainer within sandbox \"775fa643d5f7cfa82bf74c478c6e629c47a1ecd3bea095bd8025361718d732d8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927\""
Jan 23 18:57:56.132884 containerd[1545]: time="2026-01-23T18:57:56.132770887Z" level=info msg="StartContainer for \"4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927\""
Jan 23 18:57:56.137289 containerd[1545]: time="2026-01-23T18:57:56.137136896Z" level=info msg="connecting to shim 4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927" address="unix:///run/containerd/s/b71283d733386b9a17ce42a51abd06a2ccaa5b6fc55e9ee2a8ee4c3c394eb1fc" protocol=ttrpc version=3
Jan 23 18:57:56.174435 systemd[1]: Started cri-containerd-4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927.scope - libcontainer container 4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927.
Jan 23 18:57:56.270365 containerd[1545]: time="2026-01-23T18:57:56.270319290Z" level=info msg="StartContainer for \"4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927\" returns successfully"
Jan 23 18:57:57.268744 systemd[1]: cri-containerd-4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927.scope: Deactivated successfully.
Jan 23 18:57:57.270019 systemd[1]: cri-containerd-4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927.scope: Consumed 666ms CPU time, 190.5M memory peak, 171.3M written to disk.
Jan 23 18:57:57.271228 containerd[1545]: time="2026-01-23T18:57:57.271188888Z" level=info msg="received container exit event container_id:\"4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927\" id:\"4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927\" pid:3571 exited_at:{seconds:1769194677 nanos:270560913}" Jan 23 18:57:57.307749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dc94d8ac1caa21e6c18c09006aca44520bc4ad33e3083beae3a4c3cd18d1927-rootfs.mount: Deactivated successfully. Jan 23 18:57:57.322559 kubelet[2810]: I0123 18:57:57.322517 2810 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 18:57:57.587566 systemd[1]: Created slice kubepods-besteffort-podacd7ba83_69fb_473c_9a4a_e3ebf2caec6f.slice - libcontainer container kubepods-besteffort-podacd7ba83_69fb_473c_9a4a_e3ebf2caec6f.slice. Jan 23 18:57:57.609871 kubelet[2810]: I0123 18:57:57.609806 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-whisker-ca-bundle\") pod \"whisker-584c465578-dfq88\" (UID: \"acd7ba83-69fb-473c-9a4a-e3ebf2caec6f\") " pod="calico-system/whisker-584c465578-dfq88" Jan 23 18:57:57.609871 kubelet[2810]: I0123 18:57:57.609875 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-whisker-backend-key-pair\") pod \"whisker-584c465578-dfq88\" (UID: \"acd7ba83-69fb-473c-9a4a-e3ebf2caec6f\") " pod="calico-system/whisker-584c465578-dfq88" Jan 23 18:57:57.610233 kubelet[2810]: I0123 18:57:57.609913 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcxq9\" (UniqueName: \"kubernetes.io/projected/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-kube-api-access-fcxq9\") pod \"whisker-584c465578-dfq88\" (UID: \"acd7ba83-69fb-473c-9a4a-e3ebf2caec6f\") " pod="calico-system/whisker-584c465578-dfq88" Jan 23 18:57:57.811481 kubelet[2810]: I0123 18:57:57.811424 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45c0d4e6-afb3-4eae-9319-0e865551ed12-config-volume\") pod \"coredns-674b8bbfcf-hfdpp\" (UID: \"45c0d4e6-afb3-4eae-9319-0e865551ed12\") " pod="kube-system/coredns-674b8bbfcf-hfdpp" Jan 23 18:57:57.811690 kubelet[2810]: I0123 18:57:57.811499 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2nl8\" (UniqueName: \"kubernetes.io/projected/45c0d4e6-afb3-4eae-9319-0e865551ed12-kube-api-access-t2nl8\") pod \"coredns-674b8bbfcf-hfdpp\" (UID: \"45c0d4e6-afb3-4eae-9319-0e865551ed12\") " pod="kube-system/coredns-674b8bbfcf-hfdpp" Jan 23 18:57:57.882105 systemd[1]: Created slice kubepods-burstable-pod45c0d4e6_afb3_4eae_9319_0e865551ed12.slice - libcontainer container kubepods-burstable-pod45c0d4e6_afb3_4eae_9319_0e865551ed12.slice. 
Jan 23 18:57:57.895193 containerd[1545]: time="2026-01-23T18:57:57.894527930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-584c465578-dfq88,Uid:acd7ba83-69fb-473c-9a4a-e3ebf2caec6f,Namespace:calico-system,Attempt:0,}" Jan 23 18:57:57.913302 kubelet[2810]: I0123 18:57:57.913237 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jw6t\" (UniqueName: \"kubernetes.io/projected/995a2281-49c2-40bf-b075-9d751bff44f2-kube-api-access-9jw6t\") pod \"goldmane-666569f655-4z7p9\" (UID: \"995a2281-49c2-40bf-b075-9d751bff44f2\") " pod="calico-system/goldmane-666569f655-4z7p9" Jan 23 18:57:57.913302 kubelet[2810]: I0123 18:57:57.913295 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/995a2281-49c2-40bf-b075-9d751bff44f2-config\") pod \"goldmane-666569f655-4z7p9\" (UID: \"995a2281-49c2-40bf-b075-9d751bff44f2\") " pod="calico-system/goldmane-666569f655-4z7p9" Jan 23 18:57:57.913545 kubelet[2810]: I0123 18:57:57.913338 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4kcg\" (UniqueName: \"kubernetes.io/projected/64f0782c-e663-4cd4-b3ff-935ab7f31baa-kube-api-access-z4kcg\") pod \"calico-kube-controllers-8f8898896-r4tmw\" (UID: \"64f0782c-e663-4cd4-b3ff-935ab7f31baa\") " pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" Jan 23 18:57:57.913545 kubelet[2810]: I0123 18:57:57.913364 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/995a2281-49c2-40bf-b075-9d751bff44f2-goldmane-ca-bundle\") pod \"goldmane-666569f655-4z7p9\" (UID: \"995a2281-49c2-40bf-b075-9d751bff44f2\") " pod="calico-system/goldmane-666569f655-4z7p9" Jan 23 18:57:57.913545 kubelet[2810]: I0123 18:57:57.913388 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/995a2281-49c2-40bf-b075-9d751bff44f2-goldmane-key-pair\") pod \"goldmane-666569f655-4z7p9\" (UID: \"995a2281-49c2-40bf-b075-9d751bff44f2\") " pod="calico-system/goldmane-666569f655-4z7p9" Jan 23 18:57:57.913545 kubelet[2810]: I0123 18:57:57.913422 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c889810-075e-4d9b-be67-f6461023fbaa-config-volume\") pod \"coredns-674b8bbfcf-fvd9n\" (UID: \"5c889810-075e-4d9b-be67-f6461023fbaa\") " pod="kube-system/coredns-674b8bbfcf-fvd9n" Jan 23 18:57:57.913545 kubelet[2810]: I0123 18:57:57.913447 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qqhs\" (UniqueName: \"kubernetes.io/projected/8800bedc-6975-4cc8-8a9b-9da788a14188-kube-api-access-7qqhs\") pod \"calico-apiserver-587dd8bd56-9vqr7\" (UID: \"8800bedc-6975-4cc8-8a9b-9da788a14188\") " pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" Jan 23 18:57:57.914877 kubelet[2810]: I0123 18:57:57.913489 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64f0782c-e663-4cd4-b3ff-935ab7f31baa-tigera-ca-bundle\") pod \"calico-kube-controllers-8f8898896-r4tmw\" (UID: \"64f0782c-e663-4cd4-b3ff-935ab7f31baa\") " 
pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" Jan 23 18:57:57.914877 kubelet[2810]: I0123 18:57:57.914771 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8800bedc-6975-4cc8-8a9b-9da788a14188-calico-apiserver-certs\") pod \"calico-apiserver-587dd8bd56-9vqr7\" (UID: \"8800bedc-6975-4cc8-8a9b-9da788a14188\") " pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" Jan 23 18:57:57.914877 kubelet[2810]: I0123 18:57:57.914808 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsgsf\" (UniqueName: \"kubernetes.io/projected/8b961e3b-935a-4759-813c-935dbe2acf0e-kube-api-access-wsgsf\") pod \"calico-apiserver-587dd8bd56-xf4xr\" (UID: \"8b961e3b-935a-4759-813c-935dbe2acf0e\") " pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" Jan 23 18:57:57.914877 kubelet[2810]: I0123 18:57:57.914849 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml5ng\" (UniqueName: \"kubernetes.io/projected/5c889810-075e-4d9b-be67-f6461023fbaa-kube-api-access-ml5ng\") pod \"coredns-674b8bbfcf-fvd9n\" (UID: \"5c889810-075e-4d9b-be67-f6461023fbaa\") " pod="kube-system/coredns-674b8bbfcf-fvd9n" Jan 23 18:57:57.914877 kubelet[2810]: I0123 18:57:57.914877 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b961e3b-935a-4759-813c-935dbe2acf0e-calico-apiserver-certs\") pod \"calico-apiserver-587dd8bd56-xf4xr\" (UID: \"8b961e3b-935a-4759-813c-935dbe2acf0e\") " pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" Jan 23 18:57:57.923386 systemd[1]: Created slice kubepods-besteffort-pod1aa00049_b6aa_4c4a_9b9a_78530a9aeb40.slice - libcontainer container kubepods-besteffort-pod1aa00049_b6aa_4c4a_9b9a_78530a9aeb40.slice. Jan 23 18:57:57.940275 containerd[1545]: time="2026-01-23T18:57:57.939997165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5nws,Uid:1aa00049-b6aa-4c4a-9b9a-78530a9aeb40,Namespace:calico-system,Attempt:0,}" Jan 23 18:57:57.945049 systemd[1]: Created slice kubepods-besteffort-pod64f0782c_e663_4cd4_b3ff_935ab7f31baa.slice - libcontainer container kubepods-besteffort-pod64f0782c_e663_4cd4_b3ff_935ab7f31baa.slice. Jan 23 18:57:57.994855 systemd[1]: Created slice kubepods-besteffort-pod8b961e3b_935a_4759_813c_935dbe2acf0e.slice - libcontainer container kubepods-besteffort-pod8b961e3b_935a_4759_813c_935dbe2acf0e.slice. Jan 23 18:57:58.031235 systemd[1]: Created slice kubepods-burstable-pod5c889810_075e_4d9b_be67_f6461023fbaa.slice - libcontainer container kubepods-burstable-pod5c889810_075e_4d9b_be67_f6461023fbaa.slice. Jan 23 18:57:58.078201 systemd[1]: Created slice kubepods-besteffort-pod995a2281_49c2_40bf_b075_9d751bff44f2.slice - libcontainer container kubepods-besteffort-pod995a2281_49c2_40bf_b075_9d751bff44f2.slice. 
Jan 23 18:57:58.113543 containerd[1545]: time="2026-01-23T18:57:58.111904039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 18:57:58.125734 containerd[1545]: time="2026-01-23T18:57:58.122914834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4z7p9,Uid:995a2281-49c2-40bf-b075-9d751bff44f2,Namespace:calico-system,Attempt:0,}" Jan 23 18:57:58.126008 systemd[1]: Created slice kubepods-besteffort-pod8800bedc_6975_4cc8_8a9b_9da788a14188.slice - libcontainer container kubepods-besteffort-pod8800bedc_6975_4cc8_8a9b_9da788a14188.slice. Jan 23 18:57:58.147691 containerd[1545]: time="2026-01-23T18:57:58.147319121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587dd8bd56-9vqr7,Uid:8800bedc-6975-4cc8-8a9b-9da788a14188,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:57:58.200966 containerd[1545]: time="2026-01-23T18:57:58.200838506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hfdpp,Uid:45c0d4e6-afb3-4eae-9319-0e865551ed12,Namespace:kube-system,Attempt:0,}" Jan 23 18:57:58.214921 containerd[1545]: time="2026-01-23T18:57:58.214861919Z" level=error msg="Failed to destroy network for sandbox \"516b9b302a626593c0453ff9091f9738e99565668b277ee21f1530b2d4b5cd4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.219910 containerd[1545]: time="2026-01-23T18:57:58.219784546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-584c465578-dfq88,Uid:acd7ba83-69fb-473c-9a4a-e3ebf2caec6f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"516b9b302a626593c0453ff9091f9738e99565668b277ee21f1530b2d4b5cd4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.220887 kubelet[2810]: E0123 18:57:58.220831 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"516b9b302a626593c0453ff9091f9738e99565668b277ee21f1530b2d4b5cd4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.221030 kubelet[2810]: E0123 18:57:58.220912 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"516b9b302a626593c0453ff9091f9738e99565668b277ee21f1530b2d4b5cd4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-584c465578-dfq88" Jan 23 18:57:58.221030 kubelet[2810]: E0123 18:57:58.220947 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"516b9b302a626593c0453ff9091f9738e99565668b277ee21f1530b2d4b5cd4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-584c465578-dfq88" Jan 23 18:57:58.221136 kubelet[2810]: E0123 18:57:58.221020 2810 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-584c465578-dfq88_calico-system(acd7ba83-69fb-473c-9a4a-e3ebf2caec6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-584c465578-dfq88_calico-system(acd7ba83-69fb-473c-9a4a-e3ebf2caec6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"516b9b302a626593c0453ff9091f9738e99565668b277ee21f1530b2d4b5cd4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-584c465578-dfq88" podUID="acd7ba83-69fb-473c-9a4a-e3ebf2caec6f" Jan 23 18:57:58.278845 containerd[1545]: time="2026-01-23T18:57:58.278759074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f8898896-r4tmw,Uid:64f0782c-e663-4cd4-b3ff-935ab7f31baa,Namespace:calico-system,Attempt:0,}" Jan 23 18:57:58.325742 containerd[1545]: time="2026-01-23T18:57:58.325665800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587dd8bd56-xf4xr,Uid:8b961e3b-935a-4759-813c-935dbe2acf0e,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:57:58.340554 containerd[1545]: time="2026-01-23T18:57:58.340345617Z" level=error msg="Failed to destroy network for sandbox \"cb2e86992afa25ea5a142f93d4fe25f1f37e5e78e73a09b27b7ff300874e1bfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.345810 systemd[1]: run-netns-cni\x2d05ad7dc0\x2d82ea\x2df8f3\x2d0445\x2db59051e0c03c.mount: Deactivated successfully. Jan 23 18:57:58.358301 systemd[1]: run-netns-cni\x2d170f8333\x2d9516\x2d3738\x2d1a01\x2db129b23d7ced.mount: Deactivated successfully. 
Jan 23 18:57:58.370584 containerd[1545]: time="2026-01-23T18:57:58.370072422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5nws,Uid:1aa00049-b6aa-4c4a-9b9a-78530a9aeb40,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb2e86992afa25ea5a142f93d4fe25f1f37e5e78e73a09b27b7ff300874e1bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.371545 kubelet[2810]: E0123 18:57:58.371483 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb2e86992afa25ea5a142f93d4fe25f1f37e5e78e73a09b27b7ff300874e1bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.372019 containerd[1545]: time="2026-01-23T18:57:58.371550619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fvd9n,Uid:5c889810-075e-4d9b-be67-f6461023fbaa,Namespace:kube-system,Attempt:0,}" Jan 23 18:57:58.372496 kubelet[2810]: E0123 18:57:58.372250 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb2e86992afa25ea5a142f93d4fe25f1f37e5e78e73a09b27b7ff300874e1bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g5nws" Jan 23 18:57:58.372496 kubelet[2810]: E0123 18:57:58.372312 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb2e86992afa25ea5a142f93d4fe25f1f37e5e78e73a09b27b7ff300874e1bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g5nws" Jan 23 18:57:58.372496 kubelet[2810]: E0123 18:57:58.372442 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g5nws_calico-system(1aa00049-b6aa-4c4a-9b9a-78530a9aeb40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g5nws_calico-system(1aa00049-b6aa-4c4a-9b9a-78530a9aeb40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb2e86992afa25ea5a142f93d4fe25f1f37e5e78e73a09b27b7ff300874e1bfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40" Jan 23 18:57:58.472117 containerd[1545]: time="2026-01-23T18:57:58.471864916Z" level=error msg="Failed to destroy network for sandbox \"676bd03d670137a0d80e99ec6770d1dc746ee0d23f651fd566728ecaa76a3bd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.480246 systemd[1]: run-netns-cni\x2dfc54fee0\x2d2e1e\x2d9b87\x2d0920\x2dc16a0ede33b6.mount: Deactivated 
successfully. Jan 23 18:57:58.486323 containerd[1545]: time="2026-01-23T18:57:58.486259100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4z7p9,Uid:995a2281-49c2-40bf-b075-9d751bff44f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"676bd03d670137a0d80e99ec6770d1dc746ee0d23f651fd566728ecaa76a3bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.486803 kubelet[2810]: E0123 18:57:58.486750 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"676bd03d670137a0d80e99ec6770d1dc746ee0d23f651fd566728ecaa76a3bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.486948 kubelet[2810]: E0123 18:57:58.486836 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"676bd03d670137a0d80e99ec6770d1dc746ee0d23f651fd566728ecaa76a3bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4z7p9" Jan 23 18:57:58.486948 kubelet[2810]: E0123 18:57:58.486868 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"676bd03d670137a0d80e99ec6770d1dc746ee0d23f651fd566728ecaa76a3bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4z7p9" Jan 23 18:57:58.487061 kubelet[2810]: E0123 18:57:58.486942 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-4z7p9_calico-system(995a2281-49c2-40bf-b075-9d751bff44f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-4z7p9_calico-system(995a2281-49c2-40bf-b075-9d751bff44f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"676bd03d670137a0d80e99ec6770d1dc746ee0d23f651fd566728ecaa76a3bd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4z7p9" podUID="995a2281-49c2-40bf-b075-9d751bff44f2" Jan 23 18:57:58.521739 containerd[1545]: time="2026-01-23T18:57:58.521684649Z" level=error msg="Failed to destroy network for sandbox \"07045ebd1ad2373a600ad77a0d84049bd7b781d217881930eb2badb247734173\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.527142 containerd[1545]: time="2026-01-23T18:57:58.525397014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587dd8bd56-9vqr7,Uid:8800bedc-6975-4cc8-8a9b-9da788a14188,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"07045ebd1ad2373a600ad77a0d84049bd7b781d217881930eb2badb247734173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.528951 systemd[1]: run-netns-cni\x2d1fcfc0b8\x2d7ace\x2d7391\x2d211c\x2dc48d68a24caf.mount: Deactivated successfully. Jan 23 18:57:58.533551 kubelet[2810]: E0123 18:57:58.533496 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07045ebd1ad2373a600ad77a0d84049bd7b781d217881930eb2badb247734173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.533680 kubelet[2810]: E0123 18:57:58.533580 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07045ebd1ad2373a600ad77a0d84049bd7b781d217881930eb2badb247734173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" Jan 23 18:57:58.533680 kubelet[2810]: E0123 18:57:58.533626 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07045ebd1ad2373a600ad77a0d84049bd7b781d217881930eb2badb247734173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" Jan 23 18:57:58.533801 kubelet[2810]: E0123 18:57:58.533714 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-587dd8bd56-9vqr7_calico-apiserver(8800bedc-6975-4cc8-8a9b-9da788a14188)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-587dd8bd56-9vqr7_calico-apiserver(8800bedc-6975-4cc8-8a9b-9da788a14188)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07045ebd1ad2373a600ad77a0d84049bd7b781d217881930eb2badb247734173\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" podUID="8800bedc-6975-4cc8-8a9b-9da788a14188" Jan 23 18:57:58.555742 containerd[1545]: time="2026-01-23T18:57:58.555603006Z" level=error msg="Failed to destroy network for sandbox \"ece16be1bdbf3fa2afc2be89ce95e967c04bbca52996efd2c044a7599bf6360d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.559949 containerd[1545]: time="2026-01-23T18:57:58.559879732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f8898896-r4tmw,Uid:64f0782c-e663-4cd4-b3ff-935ab7f31baa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece16be1bdbf3fa2afc2be89ce95e967c04bbca52996efd2c044a7599bf6360d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.560326 kubelet[2810]: E0123 18:57:58.560209 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece16be1bdbf3fa2afc2be89ce95e967c04bbca52996efd2c044a7599bf6360d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.560326 kubelet[2810]: E0123 18:57:58.560290 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece16be1bdbf3fa2afc2be89ce95e967c04bbca52996efd2c044a7599bf6360d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" Jan 23 18:57:58.560326 kubelet[2810]: E0123 18:57:58.560322 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece16be1bdbf3fa2afc2be89ce95e967c04bbca52996efd2c044a7599bf6360d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" Jan 23 18:57:58.560557 kubelet[2810]: E0123 18:57:58.560410 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8f8898896-r4tmw_calico-system(64f0782c-e663-4cd4-b3ff-935ab7f31baa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8f8898896-r4tmw_calico-system(64f0782c-e663-4cd4-b3ff-935ab7f31baa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ece16be1bdbf3fa2afc2be89ce95e967c04bbca52996efd2c044a7599bf6360d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" podUID="64f0782c-e663-4cd4-b3ff-935ab7f31baa" Jan 23 18:57:58.569584 containerd[1545]: time="2026-01-23T18:57:58.569528466Z" level=error msg="Failed to destroy network for sandbox \"956a49df86d7ad0970cfb55aa8322811f78195bf329d0dddd0c782b156747de4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.571984 containerd[1545]: time="2026-01-23T18:57:58.571894837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hfdpp,Uid:45c0d4e6-afb3-4eae-9319-0e865551ed12,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"956a49df86d7ad0970cfb55aa8322811f78195bf329d0dddd0c782b156747de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.572908 kubelet[2810]: E0123 18:57:58.572860 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"956a49df86d7ad0970cfb55aa8322811f78195bf329d0dddd0c782b156747de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.573261 kubelet[2810]: E0123 18:57:58.572933 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"956a49df86d7ad0970cfb55aa8322811f78195bf329d0dddd0c782b156747de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hfdpp" Jan 23 18:57:58.573261 kubelet[2810]: E0123 18:57:58.572965 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"956a49df86d7ad0970cfb55aa8322811f78195bf329d0dddd0c782b156747de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hfdpp" Jan 23 18:57:58.573261 kubelet[2810]: E0123 18:57:58.573045 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hfdpp_kube-system(45c0d4e6-afb3-4eae-9319-0e865551ed12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hfdpp_kube-system(45c0d4e6-afb3-4eae-9319-0e865551ed12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"956a49df86d7ad0970cfb55aa8322811f78195bf329d0dddd0c782b156747de4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hfdpp" podUID="45c0d4e6-afb3-4eae-9319-0e865551ed12" Jan 23 18:57:58.603902 containerd[1545]: time="2026-01-23T18:57:58.603841268Z" level=error msg="Failed to destroy network for sandbox \"6264f0075f86ee395ad7cbd036727a2835b1c39d315ed9f164a2a1aa1229cc48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.604243 containerd[1545]: time="2026-01-23T18:57:58.603926604Z" level=error msg="Failed to destroy network for sandbox \"19db1cd950e8a911c0c209248b57280819f095851c9827e78813011e5ed65f24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.606089 containerd[1545]: time="2026-01-23T18:57:58.605948167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587dd8bd56-xf4xr,Uid:8b961e3b-935a-4759-813c-935dbe2acf0e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6264f0075f86ee395ad7cbd036727a2835b1c39d315ed9f164a2a1aa1229cc48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.606458 kubelet[2810]: E0123 18:57:58.606394 2810 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6264f0075f86ee395ad7cbd036727a2835b1c39d315ed9f164a2a1aa1229cc48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.606884 kubelet[2810]: E0123 18:57:58.606495 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6264f0075f86ee395ad7cbd036727a2835b1c39d315ed9f164a2a1aa1229cc48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" Jan 23 18:57:58.606884 kubelet[2810]: E0123 18:57:58.606540 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6264f0075f86ee395ad7cbd036727a2835b1c39d315ed9f164a2a1aa1229cc48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" Jan 23 18:57:58.606884 kubelet[2810]: E0123 18:57:58.606622 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-587dd8bd56-xf4xr_calico-apiserver(8b961e3b-935a-4759-813c-935dbe2acf0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-587dd8bd56-xf4xr_calico-apiserver(8b961e3b-935a-4759-813c-935dbe2acf0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6264f0075f86ee395ad7cbd036727a2835b1c39d315ed9f164a2a1aa1229cc48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" podUID="8b961e3b-935a-4759-813c-935dbe2acf0e" Jan 23 18:57:58.607690 containerd[1545]: time="2026-01-23T18:57:58.607627279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fvd9n,Uid:5c889810-075e-4d9b-be67-f6461023fbaa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19db1cd950e8a911c0c209248b57280819f095851c9827e78813011e5ed65f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.608188 kubelet[2810]: E0123 18:57:58.608136 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19db1cd950e8a911c0c209248b57280819f095851c9827e78813011e5ed65f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:57:58.608475 kubelet[2810]: E0123 18:57:58.608323 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19db1cd950e8a911c0c209248b57280819f095851c9827e78813011e5ed65f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fvd9n" Jan 23 18:57:58.608475 kubelet[2810]: E0123 18:57:58.608366 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19db1cd950e8a911c0c209248b57280819f095851c9827e78813011e5ed65f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fvd9n" Jan 23 18:57:58.609246 kubelet[2810]: E0123 18:57:58.608427 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fvd9n_kube-system(5c889810-075e-4d9b-be67-f6461023fbaa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fvd9n_kube-system(5c889810-075e-4d9b-be67-f6461023fbaa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19db1cd950e8a911c0c209248b57280819f095851c9827e78813011e5ed65f24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fvd9n" podUID="5c889810-075e-4d9b-be67-f6461023fbaa" Jan 23 18:57:59.307727 systemd[1]: run-netns-cni\x2d8e1bed16\x2df53d\x2d47ac\x2dc296\x2d687dfd1f3f3f.mount: Deactivated successfully. Jan 23 18:57:59.310072 systemd[1]: run-netns-cni\x2d1dd776c0\x2d7200\x2dd971\x2d3949\x2d887085d2d35e.mount: Deactivated successfully. Jan 23 18:57:59.310218 systemd[1]: run-netns-cni\x2db628ccc1\x2d79f6\x2dff92\x2dfc77\x2d2290f3553af0.mount: Deactivated successfully. Jan 23 18:57:59.310329 systemd[1]: run-netns-cni\x2df74ccb1c\x2d505a\x2d6ee8\x2d7d48\x2dc11b6a14ceec.mount: Deactivated successfully. Jan 23 18:58:04.711432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3832815464.mount: Deactivated successfully. 
Jan 23 18:58:04.743095 containerd[1545]: time="2026-01-23T18:58:04.742609292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:04.744442 containerd[1545]: time="2026-01-23T18:58:04.744376482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 18:58:04.745876 containerd[1545]: time="2026-01-23T18:58:04.745805909Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:04.748598 containerd[1545]: time="2026-01-23T18:58:04.748533653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:04.749513 containerd[1545]: time="2026-01-23T18:58:04.749334380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.637372952s" Jan 23 18:58:04.749513 containerd[1545]: time="2026-01-23T18:58:04.749382810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 18:58:04.779152 containerd[1545]: time="2026-01-23T18:58:04.779098857Z" level=info msg="CreateContainer within sandbox \"775fa643d5f7cfa82bf74c478c6e629c47a1ecd3bea095bd8025361718d732d8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 18:58:04.796201 containerd[1545]: time="2026-01-23T18:58:04.793358503Z" level=info msg="Container 37281ea0d09fccaf0ccc685b6862f35a48fafddcbe5181c39efe33042aeeb82a: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:04.808734 containerd[1545]: time="2026-01-23T18:58:04.808680948Z" level=info msg="CreateContainer within sandbox \"775fa643d5f7cfa82bf74c478c6e629c47a1ecd3bea095bd8025361718d732d8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"37281ea0d09fccaf0ccc685b6862f35a48fafddcbe5181c39efe33042aeeb82a\"" Jan 23 18:58:04.809763 containerd[1545]: time="2026-01-23T18:58:04.809714978Z" level=info msg="StartContainer for \"37281ea0d09fccaf0ccc685b6862f35a48fafddcbe5181c39efe33042aeeb82a\"" Jan 23 18:58:04.812340 containerd[1545]: time="2026-01-23T18:58:04.812290283Z" level=info msg="connecting to shim 37281ea0d09fccaf0ccc685b6862f35a48fafddcbe5181c39efe33042aeeb82a" address="unix:///run/containerd/s/b71283d733386b9a17ce42a51abd06a2ccaa5b6fc55e9ee2a8ee4c3c394eb1fc" protocol=ttrpc version=3 Jan 23 18:58:04.842397 systemd[1]: Started cri-containerd-37281ea0d09fccaf0ccc685b6862f35a48fafddcbe5181c39efe33042aeeb82a.scope - libcontainer container 37281ea0d09fccaf0ccc685b6862f35a48fafddcbe5181c39efe33042aeeb82a. Jan 23 18:58:04.934906 containerd[1545]: time="2026-01-23T18:58:04.934837551Z" level=info msg="StartContainer for \"37281ea0d09fccaf0ccc685b6862f35a48fafddcbe5181c39efe33042aeeb82a\" returns successfully" Jan 23 18:58:05.061225 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 18:58:05.061422 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. Jan 23 18:58:05.158853 kubelet[2810]: I0123 18:58:05.158150 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mb4kn" podStartSLOduration=1.8961226770000001 podStartE2EDuration="18.158125152s" podCreationTimestamp="2026-01-23 18:57:47 +0000 UTC" firstStartedPulling="2026-01-23 18:57:48.488632719 +0000 UTC m=+25.925988072" lastFinishedPulling="2026-01-23 18:58:04.750635195 +0000 UTC m=+42.187990547" observedRunningTime="2026-01-23 18:58:05.157503233 +0000 UTC m=+42.594858612" watchObservedRunningTime="2026-01-23 18:58:05.158125152 +0000 UTC m=+42.595480497" Jan 23 18:58:05.374657 kubelet[2810]: I0123 18:58:05.374468 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-whisker-ca-bundle\") pod \"acd7ba83-69fb-473c-9a4a-e3ebf2caec6f\" (UID: \"acd7ba83-69fb-473c-9a4a-e3ebf2caec6f\") " Jan 23 18:58:05.375465 kubelet[2810]: I0123 18:58:05.375342 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcxq9\" (UniqueName: \"kubernetes.io/projected/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-kube-api-access-fcxq9\") pod \"acd7ba83-69fb-473c-9a4a-e3ebf2caec6f\" (UID: \"acd7ba83-69fb-473c-9a4a-e3ebf2caec6f\") " Jan 23 18:58:05.375692 kubelet[2810]: I0123 18:58:05.375569 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-whisker-backend-key-pair\") pod \"acd7ba83-69fb-473c-9a4a-e3ebf2caec6f\" (UID: \"acd7ba83-69fb-473c-9a4a-e3ebf2caec6f\") " Jan 23 18:58:05.375692 kubelet[2810]: I0123 18:58:05.375659 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "acd7ba83-69fb-473c-9a4a-e3ebf2caec6f" (UID: "acd7ba83-69fb-473c-9a4a-e3ebf2caec6f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:58:05.376299 kubelet[2810]: I0123 18:58:05.375924 2810 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-whisker-ca-bundle\") on node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" DevicePath \"\"" Jan 23 18:58:05.383507 kubelet[2810]: I0123 18:58:05.383433 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-kube-api-access-fcxq9" (OuterVolumeSpecName: "kube-api-access-fcxq9") pod "acd7ba83-69fb-473c-9a4a-e3ebf2caec6f" (UID: "acd7ba83-69fb-473c-9a4a-e3ebf2caec6f"). InnerVolumeSpecName "kube-api-access-fcxq9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:58:05.384585 kubelet[2810]: I0123 18:58:05.384546 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "acd7ba83-69fb-473c-9a4a-e3ebf2caec6f" (UID: "acd7ba83-69fb-473c-9a4a-e3ebf2caec6f"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:58:05.476931 kubelet[2810]: I0123 18:58:05.476768 2810 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-whisker-backend-key-pair\") on node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" DevicePath \"\"" Jan 23 18:58:05.476931 kubelet[2810]: I0123 18:58:05.476817 2810 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fcxq9\" (UniqueName: \"kubernetes.io/projected/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f-kube-api-access-fcxq9\") on node \"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal\" DevicePath \"\"" Jan 23 18:58:05.709496 systemd[1]: var-lib-kubelet-pods-acd7ba83\x2d69fb\x2d473c\x2d9a4a\x2de3ebf2caec6f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfcxq9.mount: Deactivated successfully. Jan 23 18:58:05.709656 systemd[1]: var-lib-kubelet-pods-acd7ba83\x2d69fb\x2d473c\x2d9a4a\x2de3ebf2caec6f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 18:58:06.146233 systemd[1]: Removed slice kubepods-besteffort-podacd7ba83_69fb_473c_9a4a_e3ebf2caec6f.slice - libcontainer container kubepods-besteffort-podacd7ba83_69fb_473c_9a4a_e3ebf2caec6f.slice. Jan 23 18:58:06.244810 systemd[1]: Created slice kubepods-besteffort-pode349b807_19f1_4df8_a846_f2bc79a618bc.slice - libcontainer container kubepods-besteffort-pode349b807_19f1_4df8_a846_f2bc79a618bc.slice. Jan 23 18:58:06.284463 kubelet[2810]: I0123 18:58:06.284403 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqcfx\" (UniqueName: \"kubernetes.io/projected/e349b807-19f1-4df8-a846-f2bc79a618bc-kube-api-access-xqcfx\") pod \"whisker-6dbdb8cb8d-x4l8g\" (UID: \"e349b807-19f1-4df8-a846-f2bc79a618bc\") " pod="calico-system/whisker-6dbdb8cb8d-x4l8g" Jan 23 18:58:06.284463 kubelet[2810]: I0123 18:58:06.284474 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e349b807-19f1-4df8-a846-f2bc79a618bc-whisker-ca-bundle\") pod \"whisker-6dbdb8cb8d-x4l8g\" (UID: \"e349b807-19f1-4df8-a846-f2bc79a618bc\") " pod="calico-system/whisker-6dbdb8cb8d-x4l8g" Jan 23 18:58:06.285147 kubelet[2810]: I0123 18:58:06.284508 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e349b807-19f1-4df8-a846-f2bc79a618bc-whisker-backend-key-pair\") pod \"whisker-6dbdb8cb8d-x4l8g\" (UID: \"e349b807-19f1-4df8-a846-f2bc79a618bc\") " pod="calico-system/whisker-6dbdb8cb8d-x4l8g" Jan 23 18:58:06.551445 containerd[1545]: time="2026-01-23T18:58:06.551292314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dbdb8cb8d-x4l8g,Uid:e349b807-19f1-4df8-a846-f2bc79a618bc,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:06.782958 systemd-networkd[1420]: cali4c17302770d: Link UP Jan 23 18:58:06.785433 systemd-networkd[1420]: cali4c17302770d: Gained carrier Jan 23 18:58:06.815894 containerd[1545]: 2026-01-23 18:58:06.617 [INFO][4023] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:06.815894 containerd[1545]: 2026-01-23 18:58:06.643 [INFO][4023] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0 whisker-6dbdb8cb8d- calico-system e349b807-19f1-4df8-a846-f2bc79a618bc 921 0 2026-01-23 18:58:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6dbdb8cb8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal whisker-6dbdb8cb8d-x4l8g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4c17302770d [] [] }} ContainerID="fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" Namespace="calico-system" Pod="whisker-6dbdb8cb8d-x4l8g" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-" Jan 23 18:58:06.815894 containerd[1545]: 2026-01-23 18:58:06.644 [INFO][4023] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" Namespace="calico-system" Pod="whisker-6dbdb8cb8d-x4l8g" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0" Jan 23 18:58:06.815894 containerd[1545]: 2026-01-23 18:58:06.710 [INFO][4066] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" HandleID="k8s-pod-network.fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0" Jan 23 18:58:06.816326 containerd[1545]: 2026-01-23 18:58:06.711 [INFO][4066] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" HandleID="k8s-pod-network.fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fa30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", "pod":"whisker-6dbdb8cb8d-x4l8g", "timestamp":"2026-01-23 18:58:06.710838562 +0000 UTC"}, Hostname:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:06.816326 containerd[1545]: 2026-01-23 18:58:06.712 [INFO][4066] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:06.816326 containerd[1545]: 2026-01-23 18:58:06.712 [INFO][4066] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:58:06.816326 containerd[1545]: 2026-01-23 18:58:06.712 [INFO][4066] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal' Jan 23 18:58:06.816326 containerd[1545]: 2026-01-23 18:58:06.726 [INFO][4066] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:06.816326 containerd[1545]: 2026-01-23 18:58:06.732 [INFO][4066] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:06.816326 containerd[1545]: 2026-01-23 18:58:06.738 [INFO][4066] ipam/ipam.go 511: Trying affinity for 192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:06.816326 containerd[1545]: 2026-01-23 18:58:06.741 [INFO][4066] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:06.816710 containerd[1545]: 2026-01-23 18:58:06.744 [INFO][4066] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:06.816710 containerd[1545]: 2026-01-23 18:58:06.744 [INFO][4066] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:06.816710 containerd[1545]: 2026-01-23 18:58:06.746 [INFO][4066] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7 Jan 23 18:58:06.816710 containerd[1545]: 2026-01-23 18:58:06.752 [INFO][4066] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:06.816710 containerd[1545]: 2026-01-23 18:58:06.758 [INFO][4066] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.193/26] block=192.168.88.192/26 handle="k8s-pod-network.fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:06.816710 containerd[1545]: 2026-01-23 18:58:06.758 [INFO][4066] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.193/26] handle="k8s-pod-network.fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:06.816710 containerd[1545]: 2026-01-23 18:58:06.759 [INFO][4066] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:58:06.816710 containerd[1545]: 2026-01-23 18:58:06.759 [INFO][4066] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.193/26] IPv6=[] ContainerID="fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" HandleID="k8s-pod-network.fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0" Jan 23 18:58:06.817092 containerd[1545]: 2026-01-23 18:58:06.764 [INFO][4023] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" Namespace="calico-system" Pod="whisker-6dbdb8cb8d-x4l8g" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0", GenerateName:"whisker-6dbdb8cb8d-", Namespace:"calico-system", SelfLink:"", UID:"e349b807-19f1-4df8-a846-f2bc79a618bc", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dbdb8cb8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-6dbdb8cb8d-x4l8g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4c17302770d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:06.818423 containerd[1545]: 2026-01-23 18:58:06.764 [INFO][4023] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.193/32] ContainerID="fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" Namespace="calico-system" Pod="whisker-6dbdb8cb8d-x4l8g" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0" Jan 23 18:58:06.818423 containerd[1545]: 2026-01-23 18:58:06.764 [INFO][4023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c17302770d ContainerID="fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" Namespace="calico-system" Pod="whisker-6dbdb8cb8d-x4l8g" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0" Jan 23 18:58:06.818423 containerd[1545]: 2026-01-23 18:58:06.783 [INFO][4023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" Namespace="calico-system" Pod="whisker-6dbdb8cb8d-x4l8g" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0" Jan 23 18:58:06.818744 
containerd[1545]: 2026-01-23 18:58:06.785 [INFO][4023] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" Namespace="calico-system" Pod="whisker-6dbdb8cb8d-x4l8g" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0", GenerateName:"whisker-6dbdb8cb8d-", Namespace:"calico-system", SelfLink:"", UID:"e349b807-19f1-4df8-a846-f2bc79a618bc", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dbdb8cb8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7", Pod:"whisker-6dbdb8cb8d-x4l8g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4c17302770d", MAC:"7e:b4:22:0d:b7:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:06.819463 containerd[1545]: 2026-01-23 18:58:06.800 [INFO][4023] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" Namespace="calico-system" Pod="whisker-6dbdb8cb8d-x4l8g" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-whisker--6dbdb8cb8d--x4l8g-eth0" Jan 23 18:58:06.826624 kubelet[2810]: I0123 18:58:06.826562 2810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd7ba83-69fb-473c-9a4a-e3ebf2caec6f" path="/var/lib/kubelet/pods/acd7ba83-69fb-473c-9a4a-e3ebf2caec6f/volumes" Jan 23 18:58:06.885922 containerd[1545]: time="2026-01-23T18:58:06.885627232Z" level=info msg="connecting to shim fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7" address="unix:///run/containerd/s/76aac1e9a1a433ba49f6a55968596a2e0b44d510960417b3d117c729fc249bc2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:06.968436 systemd[1]: Started cri-containerd-fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7.scope - libcontainer container fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7. 
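With the shim connected and the cri-containerd scope started, the whisker sandbox is up; it is the image pulls that follow which fail. The 404s below can be reproduced directly against containerd's Go client (import paths as they exist in containerd 1.x; the socket path and the k8s.io namespace are taken from the log):

    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Same daemon socket containerd is serving on in this log.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace (see namespace=k8s.io above).
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // The reference the kubelet asked for; the registry answers 404.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.4", containerd.WithPullUnpack)
        switch {
        case errdefs.IsNotFound(err):
            fmt.Println("image not found:", err)
        case err != nil:
            panic(err)
        default:
            fmt.Println("pulled", img.Name())
        }
    }

errdefs.IsNotFound(err) is the condition containerd translates into the gRPC "code = NotFound" that the kubelet then surfaces as ErrImagePull.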
Jan 23 18:58:07.107337 containerd[1545]: time="2026-01-23T18:58:07.106595857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dbdb8cb8d-x4l8g,Uid:e349b807-19f1-4df8-a846-f2bc79a618bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd3052cf0a278fbcc61164cb4868debd7164d9bbbef4d7950bc8ac22940b24c7\"" Jan 23 18:58:07.112669 containerd[1545]: time="2026-01-23T18:58:07.112515441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:58:07.288381 containerd[1545]: time="2026-01-23T18:58:07.288309793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:07.289783 containerd[1545]: time="2026-01-23T18:58:07.289706620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:58:07.289783 containerd[1545]: time="2026-01-23T18:58:07.289751354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:58:07.290214 kubelet[2810]: E0123 18:58:07.290136 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:07.290925 kubelet[2810]: E0123 18:58:07.290235 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:07.290973 kubelet[2810]: E0123 18:58:07.290441 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9cf6a28650de424abc477daf1038e0ae,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqcfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6dbdb8cb8d-x4l8g_calico-system(e349b807-19f1-4df8-a846-f2bc79a618bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:07.294152 containerd[1545]: time="2026-01-23T18:58:07.294105508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:58:07.452439 containerd[1545]: time="2026-01-23T18:58:07.452269413Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:07.453974 containerd[1545]: time="2026-01-23T18:58:07.453917971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:58:07.454100 containerd[1545]: time="2026-01-23T18:58:07.454045277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:58:07.454415 kubelet[2810]: E0123 18:58:07.454341 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:07.454594 kubelet[2810]: E0123 18:58:07.454426 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:07.454795 kubelet[2810]: E0123 18:58:07.454680 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqcfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6dbdb8cb8d-x4l8g_calico-system(e349b807-19f1-4df8-a846-f2bc79a618bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:07.456035 kubelet[2810]: E0123 18:58:07.455975 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dbdb8cb8d-x4l8g" podUID="e349b807-19f1-4df8-a846-f2bc79a618bc" Jan 23 18:58:08.153274 kubelet[2810]: E0123 18:58:08.153193 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dbdb8cb8d-x4l8g" podUID="e349b807-19f1-4df8-a846-f2bc79a618bc" Jan 23 18:58:08.397595 systemd-networkd[1420]: cali4c17302770d: Gained IPv6LL Jan 23 18:58:08.809945 containerd[1545]: time="2026-01-23T18:58:08.809891594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5nws,Uid:1aa00049-b6aa-4c4a-9b9a-78530a9aeb40,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:08.949363 systemd-networkd[1420]: cali019f81d3172: Link UP Jan 23 18:58:08.950554 systemd-networkd[1420]: cali019f81d3172: Gained carrier Jan 23 18:58:08.975441 containerd[1545]: 2026-01-23 18:58:08.846 [INFO][4185] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:08.975441 containerd[1545]: 2026-01-23 18:58:08.864 [INFO][4185] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0 csi-node-driver- calico-system 1aa00049-b6aa-4c4a-9b9a-78530a9aeb40 755 0 2026-01-23 18:57:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal csi-node-driver-g5nws eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali019f81d3172 [] [] }} ContainerID="5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" Namespace="calico-system" Pod="csi-node-driver-g5nws" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-" Jan 23 18:58:08.975441 containerd[1545]: 2026-01-23 18:58:08.864 [INFO][4185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" Namespace="calico-system" Pod="csi-node-driver-g5nws" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0" Jan 23 18:58:08.975441 containerd[1545]: 2026-01-23 18:58:08.900 [INFO][4196] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" HandleID="k8s-pod-network.5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0" Jan 23 18:58:08.975840 containerd[1545]: 2026-01-23 18:58:08.901 [INFO][4196] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" 
HandleID="k8s-pod-network.5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", "pod":"csi-node-driver-g5nws", "timestamp":"2026-01-23 18:58:08.900958352 +0000 UTC"}, Hostname:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:08.975840 containerd[1545]: 2026-01-23 18:58:08.901 [INFO][4196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:08.975840 containerd[1545]: 2026-01-23 18:58:08.901 [INFO][4196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:08.975840 containerd[1545]: 2026-01-23 18:58:08.901 [INFO][4196] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal' Jan 23 18:58:08.975840 containerd[1545]: 2026-01-23 18:58:08.910 [INFO][4196] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:08.975840 containerd[1545]: 2026-01-23 18:58:08.916 [INFO][4196] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:08.975840 containerd[1545]: 2026-01-23 18:58:08.921 [INFO][4196] ipam/ipam.go 511: Trying affinity for 192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:08.975840 containerd[1545]: 2026-01-23 18:58:08.923 [INFO][4196] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:08.976404 containerd[1545]: 2026-01-23 18:58:08.926 [INFO][4196] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:08.976404 containerd[1545]: 2026-01-23 18:58:08.926 [INFO][4196] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:08.976404 containerd[1545]: 2026-01-23 18:58:08.928 [INFO][4196] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab Jan 23 18:58:08.976404 containerd[1545]: 2026-01-23 18:58:08.933 [INFO][4196] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:08.976404 containerd[1545]: 2026-01-23 18:58:08.940 [INFO][4196] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.194/26] block=192.168.88.192/26 handle="k8s-pod-network.5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:08.976404 
containerd[1545]: 2026-01-23 18:58:08.940 [INFO][4196] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.194/26] handle="k8s-pod-network.5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:08.976404 containerd[1545]: 2026-01-23 18:58:08.940 [INFO][4196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:58:08.976404 containerd[1545]: 2026-01-23 18:58:08.940 [INFO][4196] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.194/26] IPv6=[] ContainerID="5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" HandleID="k8s-pod-network.5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0" Jan 23 18:58:08.976846 containerd[1545]: 2026-01-23 18:58:08.945 [INFO][4185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" Namespace="calico-system" Pod="csi-node-driver-g5nws" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1aa00049-b6aa-4c4a-9b9a-78530a9aeb40", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-g5nws", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali019f81d3172", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:08.976979 containerd[1545]: 2026-01-23 18:58:08.945 [INFO][4185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.194/32] ContainerID="5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" Namespace="calico-system" Pod="csi-node-driver-g5nws" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0" Jan 23 18:58:08.976979 containerd[1545]: 2026-01-23 18:58:08.945 [INFO][4185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali019f81d3172 ContainerID="5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" Namespace="calico-system" Pod="csi-node-driver-g5nws" 
WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0" Jan 23 18:58:08.976979 containerd[1545]: 2026-01-23 18:58:08.951 [INFO][4185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" Namespace="calico-system" Pod="csi-node-driver-g5nws" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0" Jan 23 18:58:08.977129 containerd[1545]: 2026-01-23 18:58:08.952 [INFO][4185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" Namespace="calico-system" Pod="csi-node-driver-g5nws" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1aa00049-b6aa-4c4a-9b9a-78530a9aeb40", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab", Pod:"csi-node-driver-g5nws", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali019f81d3172", MAC:"d2:ca:c6:41:94:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:08.977258 containerd[1545]: 2026-01-23 18:58:08.970 [INFO][4185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" Namespace="calico-system" Pod="csi-node-driver-g5nws" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-csi--node--driver--g5nws-eth0" Jan 23 18:58:09.009767 containerd[1545]: time="2026-01-23T18:58:09.009684613Z" level=info msg="connecting to shim 5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab" address="unix:///run/containerd/s/dd5cb866866a79f132373ddef26844418511b8b88b63d19aefbfa9c7967ae195" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:09.045371 systemd[1]: Started cri-containerd-5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab.scope - libcontainer container 5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab. 
Jan 23 18:58:09.086977 containerd[1545]: time="2026-01-23T18:58:09.086931540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5nws,Uid:1aa00049-b6aa-4c4a-9b9a-78530a9aeb40,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d8762ae9781608fb8ebc63bfeccf600478548e7a376ca35d4f396261939ddab\"" Jan 23 18:58:09.089819 containerd[1545]: time="2026-01-23T18:58:09.089598871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:58:09.272928 containerd[1545]: time="2026-01-23T18:58:09.272866266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:09.274561 containerd[1545]: time="2026-01-23T18:58:09.274384491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:58:09.274561 containerd[1545]: time="2026-01-23T18:58:09.274509941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:58:09.274819 kubelet[2810]: E0123 18:58:09.274730 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:09.274819 kubelet[2810]: E0123 18:58:09.274791 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:09.275381 kubelet[2810]: E0123 18:58:09.274972 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrwww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g5nws_calico-system(1aa00049-b6aa-4c4a-9b9a-78530a9aeb40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:09.279655 containerd[1545]: time="2026-01-23T18:58:09.279622809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:58:09.443572 containerd[1545]: time="2026-01-23T18:58:09.443406576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:09.445279 containerd[1545]: time="2026-01-23T18:58:09.445221788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:58:09.445394 containerd[1545]: time="2026-01-23T18:58:09.445370594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:58:09.445902 kubelet[2810]: E0123 18:58:09.445797 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:58:09.445902 kubelet[2810]: E0123 18:58:09.445875 2810 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:58:09.446435 kubelet[2810]: E0123 18:58:09.446085 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrwww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g5nws_calico-system(1aa00049-b6aa-4c4a-9b9a-78530a9aeb40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:09.447386 kubelet[2810]: E0123 18:58:09.447332 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40" Jan 23 18:58:09.806709 containerd[1545]: time="2026-01-23T18:58:09.805716303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fvd9n,Uid:5c889810-075e-4d9b-be67-f6461023fbaa,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:09.949859 systemd-networkd[1420]: cali2ad63d66586: Link UP Jan 23 18:58:09.950809 systemd-networkd[1420]: cali2ad63d66586: Gained carrier Jan 23 18:58:09.972271 containerd[1545]: 2026-01-23 18:58:09.843 [INFO][4275] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:09.972271 containerd[1545]: 2026-01-23 18:58:09.860 [INFO][4275] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0 coredns-674b8bbfcf- kube-system 5c889810-075e-4d9b-be67-f6461023fbaa 853 0 2026-01-23 18:57:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal coredns-674b8bbfcf-fvd9n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ad63d66586 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvd9n" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-" Jan 23 18:58:09.972271 containerd[1545]: 2026-01-23 18:58:09.860 [INFO][4275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvd9n" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0" Jan 23 18:58:09.972271 containerd[1545]: 2026-01-23 18:58:09.897 [INFO][4286] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" HandleID="k8s-pod-network.0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0" Jan 23 18:58:09.973750 containerd[1545]: 2026-01-23 18:58:09.898 [INFO][4286] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" HandleID="k8s-pod-network.0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-fvd9n", "timestamp":"2026-01-23 18:58:09.897929594 +0000 UTC"}, Hostname:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:09.973750 containerd[1545]: 2026-01-23 18:58:09.898 [INFO][4286] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:09.973750 containerd[1545]: 2026-01-23 18:58:09.898 [INFO][4286] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:09.973750 containerd[1545]: 2026-01-23 18:58:09.898 [INFO][4286] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal' Jan 23 18:58:09.973750 containerd[1545]: 2026-01-23 18:58:09.908 [INFO][4286] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:09.973750 containerd[1545]: 2026-01-23 18:58:09.914 [INFO][4286] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:09.973750 containerd[1545]: 2026-01-23 18:58:09.921 [INFO][4286] ipam/ipam.go 511: Trying affinity for 192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:09.973750 containerd[1545]: 2026-01-23 18:58:09.924 [INFO][4286] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:09.974731 containerd[1545]: 2026-01-23 18:58:09.927 [INFO][4286] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:09.974731 containerd[1545]: 2026-01-23 18:58:09.927 [INFO][4286] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:09.974731 containerd[1545]: 2026-01-23 18:58:09.929 [INFO][4286] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33 Jan 23 18:58:09.974731 containerd[1545]: 2026-01-23 18:58:09.935 [INFO][4286] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:09.974731 containerd[1545]: 2026-01-23 18:58:09.942 [INFO][4286] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.195/26] block=192.168.88.192/26 handle="k8s-pod-network.0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:09.974731 containerd[1545]: 2026-01-23 18:58:09.942 [INFO][4286] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.195/26] handle="k8s-pod-network.0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:09.974731 containerd[1545]: 2026-01-23 18:58:09.942 [INFO][4286] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
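This is the third pass through the same /26, and the node's single block affinity explains the strictly sequential results in this section: 192.168.88.193 (whisker), .194 (csi-node-driver), and now .195 (coredns). The arithmetic, for reference:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The block this host is affine to, per the log.
        block := netip.MustParsePrefix("192.168.88.192/26")

        // With nothing released in between, the nth workload on the node
        // gets the nth address after the network address.
        a := block.Addr()
        for i := 1; i <= 3; i++ {
            a = a.Next()
            fmt.Printf("workload %d -> %s\n", i, a) // .193, .194, .195
        }
    }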
Jan 23 18:58:09.974731 containerd[1545]: 2026-01-23 18:58:09.942 [INFO][4286] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.195/26] IPv6=[] ContainerID="0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" HandleID="k8s-pod-network.0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0" Jan 23 18:58:09.975471 containerd[1545]: 2026-01-23 18:58:09.945 [INFO][4275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvd9n" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5c889810-075e-4d9b-be67-f6461023fbaa", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-fvd9n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ad63d66586", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:09.975471 containerd[1545]: 2026-01-23 18:58:09.945 [INFO][4275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.195/32] ContainerID="0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvd9n" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0" Jan 23 18:58:09.975471 containerd[1545]: 2026-01-23 18:58:09.946 [INFO][4275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ad63d66586 ContainerID="0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvd9n" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0" Jan 23 18:58:09.975471 containerd[1545]: 
2026-01-23 18:58:09.951 [INFO][4275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvd9n" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0" Jan 23 18:58:09.975471 containerd[1545]: 2026-01-23 18:58:09.951 [INFO][4275] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvd9n" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5c889810-075e-4d9b-be67-f6461023fbaa", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33", Pod:"coredns-674b8bbfcf-fvd9n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ad63d66586", MAC:"ce:37:e8:26:6c:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:09.975471 containerd[1545]: 2026-01-23 18:58:09.967 [INFO][4275] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvd9n" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--fvd9n-eth0" Jan 23 18:58:10.009225 containerd[1545]: time="2026-01-23T18:58:10.009109853Z" level=info msg="connecting to shim 0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33" address="unix:///run/containerd/s/332189713613ca7f44f0bf048ee7a313d367257d8b5af434ada6df07131c4f34" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:10.054458 systemd[1]: Started 
cri-containerd-0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33.scope - libcontainer container 0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33. Jan 23 18:58:10.124304 containerd[1545]: time="2026-01-23T18:58:10.124242993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fvd9n,Uid:5c889810-075e-4d9b-be67-f6461023fbaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33\"" Jan 23 18:58:10.131515 containerd[1545]: time="2026-01-23T18:58:10.131467838Z" level=info msg="CreateContainer within sandbox \"0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:58:10.154552 containerd[1545]: time="2026-01-23T18:58:10.153401094Z" level=info msg="Container 012a6fa518ac4473ed3f75198ae28bcb3adf2f095538c1bbcdf29a42665c0e9e: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:10.165991 kubelet[2810]: E0123 18:58:10.165691 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40" Jan 23 18:58:10.168857 containerd[1545]: time="2026-01-23T18:58:10.168809930Z" level=info msg="CreateContainer within sandbox \"0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"012a6fa518ac4473ed3f75198ae28bcb3adf2f095538c1bbcdf29a42665c0e9e\"" Jan 23 18:58:10.169781 containerd[1545]: time="2026-01-23T18:58:10.169364918Z" level=info msg="StartContainer for \"012a6fa518ac4473ed3f75198ae28bcb3adf2f095538c1bbcdf29a42665c0e9e\"" Jan 23 18:58:10.173745 containerd[1545]: time="2026-01-23T18:58:10.173002703Z" level=info msg="connecting to shim 012a6fa518ac4473ed3f75198ae28bcb3adf2f095538c1bbcdf29a42665c0e9e" address="unix:///run/containerd/s/332189713613ca7f44f0bf048ee7a313d367257d8b5af434ada6df07131c4f34" protocol=ttrpc version=3 Jan 23 18:58:10.214437 systemd[1]: Started cri-containerd-012a6fa518ac4473ed3f75198ae28bcb3adf2f095538c1bbcdf29a42665c0e9e.scope - libcontainer container 012a6fa518ac4473ed3f75198ae28bcb3adf2f095538c1bbcdf29a42665c0e9e. 
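"CreateContainer within sandbox" followed by "StartContainer" is the kubelet driving containerd over CRI. A stripped-down client making the same two calls (gRPC's unix address scheme plus the k8s.io/cri-api v1 types; a sketch, not how the kubelet is actually wired, and the coredns image reference is assumed since the log never prints it):

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // The kubelet speaks CRI to containerd over this socket.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx := context.Background()
        // The coredns sandbox ID returned earlier in the log.
        sandboxID := "0ebb92f73f9563c9381f9836dd339f8cc5f769f2e37850e3076d622096ffdc33"

        // CreateContainer within the sandbox, then StartContainer, matching
        // the two containerd entries above.
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"},
                // Image name assumed for illustration; it does not appear in the log.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.12.0"},
            },
            SandboxConfig: &runtimeapi.PodSandboxConfig{},
        })
        if err != nil {
            panic(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            panic(err)
        }
    }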
Jan 23 18:58:10.270059 containerd[1545]: time="2026-01-23T18:58:10.270005275Z" level=info msg="StartContainer for \"012a6fa518ac4473ed3f75198ae28bcb3adf2f095538c1bbcdf29a42665c0e9e\" returns successfully" Jan 23 18:58:10.701549 systemd-networkd[1420]: cali019f81d3172: Gained IPv6LL Jan 23 18:58:10.808374 containerd[1545]: time="2026-01-23T18:58:10.808323737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587dd8bd56-xf4xr,Uid:8b961e3b-935a-4759-813c-935dbe2acf0e,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:58:10.943800 systemd-networkd[1420]: cali8cb0b7bc300: Link UP Jan 23 18:58:10.944125 systemd-networkd[1420]: cali8cb0b7bc300: Gained carrier Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.840 [INFO][4400] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.855 [INFO][4400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0 calico-apiserver-587dd8bd56- calico-apiserver 8b961e3b-935a-4759-813c-935dbe2acf0e 851 0 2026-01-23 18:57:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:587dd8bd56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal calico-apiserver-587dd8bd56-xf4xr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8cb0b7bc300 [] [] }} ContainerID="50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-xf4xr" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.855 [INFO][4400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-xf4xr" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.893 [INFO][4412] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" HandleID="k8s-pod-network.50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.893 [INFO][4412] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" HandleID="k8s-pod-network.50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", "pod":"calico-apiserver-587dd8bd56-xf4xr", "timestamp":"2026-01-23 18:58:10.893752018 +0000 
UTC"}, Hostname:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.894 [INFO][4412] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.894 [INFO][4412] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.894 [INFO][4412] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal' Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.905 [INFO][4412] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.911 [INFO][4412] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.916 [INFO][4412] ipam/ipam.go 511: Trying affinity for 192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.919 [INFO][4412] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.922 [INFO][4412] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.922 [INFO][4412] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.924 [INFO][4412] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.929 [INFO][4412] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.938 [INFO][4412] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.196/26] block=192.168.88.192/26 handle="k8s-pod-network.50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.938 [INFO][4412] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.196/26] handle="k8s-pod-network.50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.938 [INFO][4412] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:58:10.963963 containerd[1545]: 2026-01-23 18:58:10.938 [INFO][4412] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.196/26] IPv6=[] ContainerID="50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" HandleID="k8s-pod-network.50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0" Jan 23 18:58:10.966475 containerd[1545]: 2026-01-23 18:58:10.940 [INFO][4400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-xf4xr" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0", GenerateName:"calico-apiserver-587dd8bd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b961e3b-935a-4759-813c-935dbe2acf0e", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587dd8bd56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-587dd8bd56-xf4xr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cb0b7bc300", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:10.966475 containerd[1545]: 2026-01-23 18:58:10.940 [INFO][4400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.196/32] ContainerID="50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-xf4xr" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0" Jan 23 18:58:10.966475 containerd[1545]: 2026-01-23 18:58:10.940 [INFO][4400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cb0b7bc300 ContainerID="50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-xf4xr" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0" Jan 23 18:58:10.966475 containerd[1545]: 2026-01-23 18:58:10.943 [INFO][4400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" 
Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-xf4xr" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0" Jan 23 18:58:10.966475 containerd[1545]: 2026-01-23 18:58:10.944 [INFO][4400] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-xf4xr" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0", GenerateName:"calico-apiserver-587dd8bd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b961e3b-935a-4759-813c-935dbe2acf0e", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587dd8bd56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a", Pod:"calico-apiserver-587dd8bd56-xf4xr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cb0b7bc300", MAC:"aa:4e:46:ca:f0:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:10.966475 containerd[1545]: 2026-01-23 18:58:10.959 [INFO][4400] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-xf4xr" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--xf4xr-eth0" Jan 23 18:58:11.004077 containerd[1545]: time="2026-01-23T18:58:11.003547878Z" level=info msg="connecting to shim 50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a" address="unix:///run/containerd/s/4625cbd68618491d2ac7d55c3ecc84a55b22d9db933b1ba872cb02bf949561c2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:11.050483 systemd[1]: Started cri-containerd-50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a.scope - libcontainer container 50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a. 
Jan 23 18:58:11.118710 containerd[1545]: time="2026-01-23T18:58:11.118514585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587dd8bd56-xf4xr,Uid:8b961e3b-935a-4759-813c-935dbe2acf0e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"50f1d8e158fb6fac0521bae930bb1eb5c376816deef6a44d4b1d4bf1a3dbd22a\"" Jan 23 18:58:11.120497 containerd[1545]: time="2026-01-23T18:58:11.120457746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:11.288957 containerd[1545]: time="2026-01-23T18:58:11.288461891Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:11.290446 containerd[1545]: time="2026-01-23T18:58:11.290320356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:11.290584 containerd[1545]: time="2026-01-23T18:58:11.290544637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:58:11.290855 kubelet[2810]: E0123 18:58:11.290805 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:11.291415 kubelet[2810]: E0123 18:58:11.290871 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:11.291415 kubelet[2810]: E0123 18:58:11.291105 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wsgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587dd8bd56-xf4xr_calico-apiserver(8b961e3b-935a-4759-813c-935dbe2acf0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:11.292697 kubelet[2810]: E0123 18:58:11.292625 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" podUID="8b961e3b-935a-4759-813c-935dbe2acf0e" Jan 23 18:58:11.806708 containerd[1545]: time="2026-01-23T18:58:11.806636183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hfdpp,Uid:45c0d4e6-afb3-4eae-9319-0e865551ed12,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:11.807296 containerd[1545]: time="2026-01-23T18:58:11.806643253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587dd8bd56-9vqr7,Uid:8800bedc-6975-4cc8-8a9b-9da788a14188,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:58:11.853684 systemd-networkd[1420]: cali2ad63d66586: Gained IPv6LL Jan 23 18:58:12.046358 systemd-networkd[1420]: cali8cb0b7bc300: Gained IPv6LL Jan 23 18:58:12.062409 systemd-networkd[1420]: caliaae3c72e2eb: Link UP Jan 23 18:58:12.063684 systemd-networkd[1420]: caliaae3c72e2eb: Gained carrier Jan 23 18:58:12.085928 kubelet[2810]: I0123 18:58:12.084411 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fvd9n" podStartSLOduration=42.084382064 podStartE2EDuration="42.084382064s" podCreationTimestamp="2026-01-23 18:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:58:11.18350565 +0000 UTC m=+48.620861020" watchObservedRunningTime="2026-01-23 18:58:12.084382064 +0000 UTC m=+49.521737440" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:11.898 [INFO][4496] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:11.930 [INFO][4496] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0 calico-apiserver-587dd8bd56- calico-apiserver 8800bedc-6975-4cc8-8a9b-9da788a14188 852 0 2026-01-23 18:57:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:587dd8bd56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal calico-apiserver-587dd8bd56-9vqr7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaae3c72e2eb [] [] }} ContainerID="60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-9vqr7" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:11.930 [INFO][4496] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-9vqr7" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:11.994 [INFO][4520] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" HandleID="k8s-pod-network.60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:11.995 [INFO][4520] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" HandleID="k8s-pod-network.60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f910), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", "pod":"calico-apiserver-587dd8bd56-9vqr7", "timestamp":"2026-01-23 18:58:11.994868602 +0000 UTC"}, Hostname:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:11.995 [INFO][4520] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:11.995 [INFO][4520] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:11.995 [INFO][4520] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal' Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.007 [INFO][4520] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.020 [INFO][4520] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.027 [INFO][4520] ipam/ipam.go 511: Trying affinity for 192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.030 [INFO][4520] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.034 [INFO][4520] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.034 [INFO][4520] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.036 [INFO][4520] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.041 [INFO][4520] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.049 [INFO][4520] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.197/26] block=192.168.88.192/26 handle="k8s-pod-network.60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.049 [INFO][4520] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.197/26] handle="k8s-pod-network.60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.049 [INFO][4520] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:58:12.086583 containerd[1545]: 2026-01-23 18:58:12.049 [INFO][4520] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.197/26] IPv6=[] ContainerID="60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" HandleID="k8s-pod-network.60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0" Jan 23 18:58:12.089066 containerd[1545]: 2026-01-23 18:58:12.053 [INFO][4496] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-9vqr7" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0", GenerateName:"calico-apiserver-587dd8bd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"8800bedc-6975-4cc8-8a9b-9da788a14188", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587dd8bd56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-587dd8bd56-9vqr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaae3c72e2eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:12.089066 containerd[1545]: 2026-01-23 18:58:12.053 [INFO][4496] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.197/32] ContainerID="60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-9vqr7" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0" Jan 23 18:58:12.089066 containerd[1545]: 2026-01-23 18:58:12.053 [INFO][4496] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaae3c72e2eb ContainerID="60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-9vqr7" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0" Jan 23 18:58:12.089066 containerd[1545]: 2026-01-23 18:58:12.064 [INFO][4496] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" 
Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-9vqr7" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0" Jan 23 18:58:12.089066 containerd[1545]: 2026-01-23 18:58:12.066 [INFO][4496] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-9vqr7" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0", GenerateName:"calico-apiserver-587dd8bd56-", Namespace:"calico-apiserver", SelfLink:"", UID:"8800bedc-6975-4cc8-8a9b-9da788a14188", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587dd8bd56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac", Pod:"calico-apiserver-587dd8bd56-9vqr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaae3c72e2eb", MAC:"1a:1a:33:07:57:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:12.089066 containerd[1545]: 2026-01-23 18:58:12.080 [INFO][4496] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" Namespace="calico-apiserver" Pod="calico-apiserver-587dd8bd56-9vqr7" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--apiserver--587dd8bd56--9vqr7-eth0" Jan 23 18:58:12.118667 containerd[1545]: time="2026-01-23T18:58:12.118608744Z" level=info msg="connecting to shim 60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac" address="unix:///run/containerd/s/842a8c8d2d6343dbfc4db1410d10361c038f4d9f283dd65cee7228bdb79b8020" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:12.169649 systemd[1]: Started cri-containerd-60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac.scope - libcontainer container 60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac. 
Jan 23 18:58:12.177213 kubelet[2810]: E0123 18:58:12.176557 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" podUID="8b961e3b-935a-4759-813c-935dbe2acf0e" Jan 23 18:58:12.230149 systemd-networkd[1420]: calic33a9d9dad8: Link UP Jan 23 18:58:12.233621 systemd-networkd[1420]: calic33a9d9dad8: Gained carrier Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:11.921 [INFO][4495] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:11.953 [INFO][4495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0 coredns-674b8bbfcf- kube-system 45c0d4e6-afb3-4eae-9319-0e865551ed12 850 0 2026-01-23 18:57:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal coredns-674b8bbfcf-hfdpp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic33a9d9dad8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-hfdpp" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:11.953 [INFO][4495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-hfdpp" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.032 [INFO][4526] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" HandleID="k8s-pod-network.6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.033 [INFO][4526] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" HandleID="k8s-pod-network.6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-hfdpp", "timestamp":"2026-01-23 18:58:12.0324866 +0000 UTC"}, 
Hostname:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.033 [INFO][4526] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.049 [INFO][4526] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.049 [INFO][4526] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal' Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.112 [INFO][4526] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.129 [INFO][4526] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.143 [INFO][4526] ipam/ipam.go 511: Trying affinity for 192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.156 [INFO][4526] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.181 [INFO][4526] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.181 [INFO][4526] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.186 [INFO][4526] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.199 [INFO][4526] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.220 [INFO][4526] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.198/26] block=192.168.88.192/26 handle="k8s-pod-network.6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.220 [INFO][4526] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.198/26] handle="k8s-pod-network.6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.220 [INFO][4526] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:58:12.275345 containerd[1545]: 2026-01-23 18:58:12.220 [INFO][4526] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.198/26] IPv6=[] ContainerID="6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" HandleID="k8s-pod-network.6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0" Jan 23 18:58:12.279698 containerd[1545]: 2026-01-23 18:58:12.223 [INFO][4495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-hfdpp" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"45c0d4e6-afb3-4eae-9319-0e865551ed12", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-hfdpp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic33a9d9dad8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:12.279698 containerd[1545]: 2026-01-23 18:58:12.223 [INFO][4495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.198/32] ContainerID="6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-hfdpp" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0" Jan 23 18:58:12.279698 containerd[1545]: 2026-01-23 18:58:12.224 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic33a9d9dad8 ContainerID="6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-hfdpp" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0" Jan 23 18:58:12.279698 containerd[1545]: 
2026-01-23 18:58:12.233 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-hfdpp" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0" Jan 23 18:58:12.279698 containerd[1545]: 2026-01-23 18:58:12.235 [INFO][4495] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-hfdpp" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"45c0d4e6-afb3-4eae-9319-0e865551ed12", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff", Pod:"coredns-674b8bbfcf-hfdpp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic33a9d9dad8", MAC:"7e:a3:9d:e0:e3:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:12.279698 containerd[1545]: 2026-01-23 18:58:12.259 [INFO][4495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-hfdpp" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--hfdpp-eth0" Jan 23 18:58:12.335649 containerd[1545]: time="2026-01-23T18:58:12.335468719Z" level=info msg="connecting to shim 6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff" address="unix:///run/containerd/s/c0c9d3219d999dd8f067137d68215d6eea0b9d9b6be7148a7c562c0897663ff1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:12.424142 systemd[1]: Started 
cri-containerd-6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff.scope - libcontainer container 6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff. Jan 23 18:58:12.439375 containerd[1545]: time="2026-01-23T18:58:12.437966788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587dd8bd56-9vqr7,Uid:8800bedc-6975-4cc8-8a9b-9da788a14188,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"60a32f6bda3237c11b1b688886ef034e61e30473df0d9a6b167583efc83f05ac\"" Jan 23 18:58:12.445009 containerd[1545]: time="2026-01-23T18:58:12.444972354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:12.544473 containerd[1545]: time="2026-01-23T18:58:12.544418557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hfdpp,Uid:45c0d4e6-afb3-4eae-9319-0e865551ed12,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff\"" Jan 23 18:58:12.553961 containerd[1545]: time="2026-01-23T18:58:12.553916600Z" level=info msg="CreateContainer within sandbox \"6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:58:12.573234 containerd[1545]: time="2026-01-23T18:58:12.573187384Z" level=info msg="Container 3e759b8d134301fdb6c9faec35f3d089f0d50483f809edfde63c6b3497a64d20: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:12.594285 containerd[1545]: time="2026-01-23T18:58:12.594040543Z" level=info msg="CreateContainer within sandbox \"6d6b3c3059d4d6ed2f0335e194d2b77f66efdf7187ec6d3853177d74694f10ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e759b8d134301fdb6c9faec35f3d089f0d50483f809edfde63c6b3497a64d20\"" Jan 23 18:58:12.599307 containerd[1545]: time="2026-01-23T18:58:12.598763721Z" level=info msg="StartContainer for \"3e759b8d134301fdb6c9faec35f3d089f0d50483f809edfde63c6b3497a64d20\"" Jan 23 18:58:12.601676 containerd[1545]: time="2026-01-23T18:58:12.600609387Z" level=info msg="connecting to shim 3e759b8d134301fdb6c9faec35f3d089f0d50483f809edfde63c6b3497a64d20" address="unix:///run/containerd/s/c0c9d3219d999dd8f067137d68215d6eea0b9d9b6be7148a7c562c0897663ff1" protocol=ttrpc version=3 Jan 23 18:58:12.608560 containerd[1545]: time="2026-01-23T18:58:12.607552988Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:12.612227 containerd[1545]: time="2026-01-23T18:58:12.612144954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:12.613204 containerd[1545]: time="2026-01-23T18:58:12.612198004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:58:12.613510 kubelet[2810]: E0123 18:58:12.613462 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:12.615005 kubelet[2810]: E0123 18:58:12.614060 2810 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:12.615005 kubelet[2810]: E0123 18:58:12.614904 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qqhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587dd8bd56-9vqr7_calico-apiserver(8800bedc-6975-4cc8-8a9b-9da788a14188): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:12.616526 kubelet[2810]: E0123 18:58:12.616432 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" podUID="8800bedc-6975-4cc8-8a9b-9da788a14188" Jan 23 18:58:12.636578 systemd[1]: Started cri-containerd-3e759b8d134301fdb6c9faec35f3d089f0d50483f809edfde63c6b3497a64d20.scope - libcontainer container 
3e759b8d134301fdb6c9faec35f3d089f0d50483f809edfde63c6b3497a64d20. Jan 23 18:58:12.701928 containerd[1545]: time="2026-01-23T18:58:12.701874601Z" level=info msg="StartContainer for \"3e759b8d134301fdb6c9faec35f3d089f0d50483f809edfde63c6b3497a64d20\" returns successfully" Jan 23 18:58:12.812506 containerd[1545]: time="2026-01-23T18:58:12.812071446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f8898896-r4tmw,Uid:64f0782c-e663-4cd4-b3ff-935ab7f31baa,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:13.131369 systemd-networkd[1420]: calid6995a6d877: Link UP Jan 23 18:58:13.132485 systemd-networkd[1420]: calid6995a6d877: Gained carrier Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:12.874 [INFO][4684] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:12.910 [INFO][4684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0 calico-kube-controllers-8f8898896- calico-system 64f0782c-e663-4cd4-b3ff-935ab7f31baa 854 0 2026-01-23 18:57:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8f8898896 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal calico-kube-controllers-8f8898896-r4tmw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid6995a6d877 [] [] }} ContainerID="63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" Namespace="calico-system" Pod="calico-kube-controllers-8f8898896-r4tmw" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:12.911 [INFO][4684] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" Namespace="calico-system" Pod="calico-kube-controllers-8f8898896-r4tmw" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:12.996 [INFO][4705] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" HandleID="k8s-pod-network.63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:12.997 [INFO][4705] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" HandleID="k8s-pod-network.63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f890), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", "pod":"calico-kube-controllers-8f8898896-r4tmw", 
"timestamp":"2026-01-23 18:58:12.996136122 +0000 UTC"}, Hostname:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:12.997 [INFO][4705] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:12.997 [INFO][4705] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:12.997 [INFO][4705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal' Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.014 [INFO][4705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.023 [INFO][4705] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.033 [INFO][4705] ipam/ipam.go 511: Trying affinity for 192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.037 [INFO][4705] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.046 [INFO][4705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.047 [INFO][4705] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.050 [INFO][4705] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.059 [INFO][4705] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.076 [INFO][4705] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.199/26] block=192.168.88.192/26 handle="k8s-pod-network.63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.076 [INFO][4705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.199/26] handle="k8s-pod-network.63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.077 [INFO][4705] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:58:13.170573 containerd[1545]: 2026-01-23 18:58:13.077 [INFO][4705] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.199/26] IPv6=[] ContainerID="63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" HandleID="k8s-pod-network.63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0" Jan 23 18:58:13.173905 containerd[1545]: 2026-01-23 18:58:13.091 [INFO][4684] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" Namespace="calico-system" Pod="calico-kube-controllers-8f8898896-r4tmw" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0", GenerateName:"calico-kube-controllers-8f8898896-", Namespace:"calico-system", SelfLink:"", UID:"64f0782c-e663-4cd4-b3ff-935ab7f31baa", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f8898896", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-8f8898896-r4tmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6995a6d877", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:13.173905 containerd[1545]: 2026-01-23 18:58:13.091 [INFO][4684] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.199/32] ContainerID="63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" Namespace="calico-system" Pod="calico-kube-controllers-8f8898896-r4tmw" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0" Jan 23 18:58:13.173905 containerd[1545]: 2026-01-23 18:58:13.091 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6995a6d877 ContainerID="63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" Namespace="calico-system" Pod="calico-kube-controllers-8f8898896-r4tmw" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0" Jan 23 18:58:13.173905 containerd[1545]: 2026-01-23 18:58:13.130 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" Namespace="calico-system" Pod="calico-kube-controllers-8f8898896-r4tmw" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0" Jan 23 18:58:13.173905 containerd[1545]: 2026-01-23 18:58:13.132 [INFO][4684] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" Namespace="calico-system" Pod="calico-kube-controllers-8f8898896-r4tmw" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0", GenerateName:"calico-kube-controllers-8f8898896-", Namespace:"calico-system", SelfLink:"", UID:"64f0782c-e663-4cd4-b3ff-935ab7f31baa", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f8898896", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c", Pod:"calico-kube-controllers-8f8898896-r4tmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6995a6d877", MAC:"0e:f3:b0:ff:57:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:13.173905 containerd[1545]: 2026-01-23 18:58:13.162 [INFO][4684] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" Namespace="calico-system" Pod="calico-kube-controllers-8f8898896-r4tmw" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-calico--kube--controllers--8f8898896--r4tmw-eth0" Jan 23 18:58:13.184199 kubelet[2810]: E0123 18:58:13.183829 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" podUID="8800bedc-6975-4cc8-8a9b-9da788a14188" Jan 23 18:58:13.240791 kubelet[2810]: I0123 
18:58:13.240002 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hfdpp" podStartSLOduration=43.239974242 podStartE2EDuration="43.239974242s" podCreationTimestamp="2026-01-23 18:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:58:13.239777712 +0000 UTC m=+50.677133150" watchObservedRunningTime="2026-01-23 18:58:13.239974242 +0000 UTC m=+50.677329610" Jan 23 18:58:13.244895 containerd[1545]: time="2026-01-23T18:58:13.244726994Z" level=info msg="connecting to shim 63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c" address="unix:///run/containerd/s/0afcab4f6270bc80227ebf7e923007fb8c1ee666ee56cb95ee4e85c42eace344" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:13.310683 systemd[1]: Started cri-containerd-63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c.scope - libcontainer container 63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c. Jan 23 18:58:13.390182 systemd-networkd[1420]: calic33a9d9dad8: Gained IPv6LL Jan 23 18:58:13.398147 containerd[1545]: time="2026-01-23T18:58:13.398052517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f8898896-r4tmw,Uid:64f0782c-e663-4cd4-b3ff-935ab7f31baa,Namespace:calico-system,Attempt:0,} returns sandbox id \"63b5c5de270b439c655781407bd620f98859fcaf7b0a29c323a78bdb2fdaf72c\"" Jan 23 18:58:13.401438 containerd[1545]: time="2026-01-23T18:58:13.401397548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:58:13.572377 containerd[1545]: time="2026-01-23T18:58:13.572222376Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:13.574360 containerd[1545]: time="2026-01-23T18:58:13.574297176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:58:13.574501 containerd[1545]: time="2026-01-23T18:58:13.574425520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:58:13.574816 kubelet[2810]: E0123 18:58:13.574741 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:58:13.575002 kubelet[2810]: E0123 18:58:13.574834 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:58:13.575346 kubelet[2810]: E0123 18:58:13.575251 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4kcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8f8898896-r4tmw_calico-system(64f0782c-e663-4cd4-b3ff-935ab7f31baa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:13.577454 kubelet[2810]: E0123 18:58:13.577406 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" podUID="64f0782c-e663-4cd4-b3ff-935ab7f31baa" Jan 23 18:58:13.805858 containerd[1545]: time="2026-01-23T18:58:13.805699058Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:goldmane-666569f655-4z7p9,Uid:995a2281-49c2-40bf-b075-9d751bff44f2,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:13.971368 systemd-networkd[1420]: cali4b3273642f7: Link UP Jan 23 18:58:13.971685 systemd-networkd[1420]: cali4b3273642f7: Gained carrier Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.848 [INFO][4778] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.872 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0 goldmane-666569f655- calico-system 995a2281-49c2-40bf-b075-9d751bff44f2 856 0 2026-01-23 18:57:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal goldmane-666569f655-4z7p9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4b3273642f7 [] [] }} ContainerID="41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" Namespace="calico-system" Pod="goldmane-666569f655-4z7p9" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.872 [INFO][4778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" Namespace="calico-system" Pod="goldmane-666569f655-4z7p9" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.921 [INFO][4786] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" HandleID="k8s-pod-network.41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.921 [INFO][4786] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" HandleID="k8s-pod-network.41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", "pod":"goldmane-666569f655-4z7p9", "timestamp":"2026-01-23 18:58:13.921549929 +0000 UTC"}, Hostname:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.921 [INFO][4786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.921 [INFO][4786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
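Between the lock acquisition above and the release below, the allocator loads the block and takes the next free ordinal from it. A toy sketch of that step, assuming a plain 64-slot bitmap; Calico's real allocator (ipam.go) persists this state in the datastore under the host-wide lock, and which ordinals are shown as taken here is illustrative only:

package main

import "fmt"

// assign takes the lowest free ordinal in a 64-slot /26 block.
// Toy model only — not Calico's implementation.
func assign(used map[int]bool) (int, bool) {
	for ord := 0; ord < 64; ord++ {
		if !used[ord] {
			used[ord] = true
			return ord, true
		}
	}
	return -1, false
}

func main() {
	// Pretend ordinals 0-7 (.192-.199) are already taken, as in this log.
	used := map[int]bool{}
	for ord := 0; ord <= 7; ord++ {
		used[ord] = true
	}
	if ord, ok := assign(used); ok {
		// Block base 192.168.88.192 + ordinal 8 -> 192.168.88.200.
		fmt.Printf("assigned ordinal %d -> 192.168.88.%d\n", ord, 192+ord)
	}
}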
Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.921 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal' Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.931 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.937 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.942 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.944 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.947 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.947 [INFO][4786] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.949 [INFO][4786] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.953 [INFO][4786] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.961 [INFO][4786] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.200/26] block=192.168.88.192/26 handle="k8s-pod-network.41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.962 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.200/26] handle="k8s-pod-network.41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" host="ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal" Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.962 [INFO][4786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
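The ImagePullBackOff entries threaded through these allocations follow kubelet's image-pull back-off: each failed pull roughly doubles the delay before the next attempt, up to a ceiling. A sketch of that schedule, assuming kubelet's commonly documented defaults of a 10-second initial period and a 5-minute cap; both values are assumptions, not read from this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial back-off, doubling per
	// failure, capped at 5 minutes (not taken from this log).
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("failure %d -> next pull no sooner than %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}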
Jan 23 18:58:13.992446 containerd[1545]: 2026-01-23 18:58:13.962 [INFO][4786] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.200/26] IPv6=[] ContainerID="41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" HandleID="k8s-pod-network.41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" Workload="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0" Jan 23 18:58:13.995781 containerd[1545]: 2026-01-23 18:58:13.964 [INFO][4778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" Namespace="calico-system" Pod="goldmane-666569f655-4z7p9" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"995a2281-49c2-40bf-b075-9d751bff44f2", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-666569f655-4z7p9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4b3273642f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:13.995781 containerd[1545]: 2026-01-23 18:58:13.964 [INFO][4778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.200/32] ContainerID="41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" Namespace="calico-system" Pod="goldmane-666569f655-4z7p9" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0" Jan 23 18:58:13.995781 containerd[1545]: 2026-01-23 18:58:13.964 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b3273642f7 ContainerID="41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" Namespace="calico-system" Pod="goldmane-666569f655-4z7p9" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0" Jan 23 18:58:13.995781 containerd[1545]: 2026-01-23 18:58:13.968 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" Namespace="calico-system" Pod="goldmane-666569f655-4z7p9" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0" Jan 23 
18:58:13.995781 containerd[1545]: 2026-01-23 18:58:13.968 [INFO][4778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" Namespace="calico-system" Pod="goldmane-666569f655-4z7p9" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"995a2281-49c2-40bf-b075-9d751bff44f2", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-4a63231b6ba4969e40f9.c.flatcar-212911.internal", ContainerID:"41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b", Pod:"goldmane-666569f655-4z7p9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4b3273642f7", MAC:"66:4b:0f:42:01:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:13.995781 containerd[1545]: 2026-01-23 18:58:13.989 [INFO][4778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" Namespace="calico-system" Pod="goldmane-666569f655-4z7p9" WorkloadEndpoint="ci--4459--2--3--4a63231b6ba4969e40f9.c.flatcar--212911.internal-k8s-goldmane--666569f655--4z7p9-eth0" Jan 23 18:58:14.032856 containerd[1545]: time="2026-01-23T18:58:14.032744772Z" level=info msg="connecting to shim 41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b" address="unix:///run/containerd/s/be0a72eef8512baaa0d196ce87b60e73c73d204bb1edc35c103258062588fbd5" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:14.066496 systemd[1]: Started cri-containerd-41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b.scope - libcontainer container 41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b. 
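Every failed pull in this log resolves the same way: containerd takes the registry host from the image string itself, and ghcr.io answers 404 for the tag. A simplified Go sketch of how a reference such as ghcr.io/flatcar/calico/goldmane:v3.30.4 splits into host, repository, and tag; real resolvers implement the full distribution reference grammar (digests, default registries, ports in hostnames), which this stdlib version deliberately ignores:

package main

import (
	"fmt"
	"strings"
)

// splitRef is an illustration, not containerd's parser: host is
// everything before the first '/', the tag everything after the
// last ':'. Digests, ports, and default-registry rules are ignored.
func splitRef(ref string) (host, repo, tag string) {
	host, rest, _ := strings.Cut(ref, "/")
	if i := strings.LastIndex(rest, ":"); i >= 0 {
		return host, rest[:i], rest[i+1:]
	}
	return host, rest, "latest"
}

func main() {
	host, repo, tag := splitRef("ghcr.io/flatcar/calico/goldmane:v3.30.4")
	// The 404s in this log mean this tag is absent under this repository on the host.
	fmt.Printf("host=%s repo=%s tag=%s\n", host, repo, tag)
}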
Jan 23 18:58:14.094513 systemd-networkd[1420]: caliaae3c72e2eb: Gained IPv6LL Jan 23 18:58:14.152253 containerd[1545]: time="2026-01-23T18:58:14.152200148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4z7p9,Uid:995a2281-49c2-40bf-b075-9d751bff44f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"41ca00979c8b11dd689b477f0f18779f95313e38ad50ae3f6733a95791caa02b\"" Jan 23 18:58:14.154479 containerd[1545]: time="2026-01-23T18:58:14.154446158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:58:14.201799 kubelet[2810]: E0123 18:58:14.201599 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" podUID="64f0782c-e663-4cd4-b3ff-935ab7f31baa" Jan 23 18:58:14.204361 kubelet[2810]: E0123 18:58:14.203838 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" podUID="8800bedc-6975-4cc8-8a9b-9da788a14188" Jan 23 18:58:14.221414 systemd-networkd[1420]: calid6995a6d877: Gained IPv6LL Jan 23 18:58:14.322000 containerd[1545]: time="2026-01-23T18:58:14.321850544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:14.324402 containerd[1545]: time="2026-01-23T18:58:14.324257368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:58:14.325601 containerd[1545]: time="2026-01-23T18:58:14.324317435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:58:14.325718 kubelet[2810]: E0123 18:58:14.325536 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:58:14.325967 kubelet[2810]: E0123 18:58:14.325842 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:58:14.326463 kubelet[2810]: E0123 
18:58:14.326364 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jw6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4z7p9_calico-system(995a2281-49c2-40bf-b075-9d751bff44f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:14.327603 kubelet[2810]: E0123 18:58:14.327488 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4z7p9" 
podUID="995a2281-49c2-40bf-b075-9d751bff44f2" Jan 23 18:58:15.118631 systemd-networkd[1420]: cali4b3273642f7: Gained IPv6LL Jan 23 18:58:15.204876 kubelet[2810]: E0123 18:58:15.204792 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" podUID="64f0782c-e663-4cd4-b3ff-935ab7f31baa" Jan 23 18:58:15.208004 kubelet[2810]: E0123 18:58:15.205653 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4z7p9" podUID="995a2281-49c2-40bf-b075-9d751bff44f2" Jan 23 18:58:16.086463 kubelet[2810]: I0123 18:58:16.085997 2810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:58:17.332003 systemd-networkd[1420]: vxlan.calico: Link UP Jan 23 18:58:17.332022 systemd-networkd[1420]: vxlan.calico: Gained carrier Jan 23 18:58:19.149487 systemd-networkd[1420]: vxlan.calico: Gained IPv6LL Jan 23 18:58:19.807857 containerd[1545]: time="2026-01-23T18:58:19.807778235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:58:19.977328 containerd[1545]: time="2026-01-23T18:58:19.977265432Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:19.979067 containerd[1545]: time="2026-01-23T18:58:19.978981839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:58:19.979256 containerd[1545]: time="2026-01-23T18:58:19.979088085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:58:19.979378 kubelet[2810]: E0123 18:58:19.979325 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:19.979844 kubelet[2810]: E0123 18:58:19.979393 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:19.979844 kubelet[2810]: E0123 18:58:19.979568 2810 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9cf6a28650de424abc477daf1038e0ae,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqcfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6dbdb8cb8d-x4l8g_calico-system(e349b807-19f1-4df8-a846-f2bc79a618bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:19.983032 containerd[1545]: time="2026-01-23T18:58:19.982990993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:58:20.139991 containerd[1545]: time="2026-01-23T18:58:20.139817622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:20.142227 containerd[1545]: time="2026-01-23T18:58:20.141291079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:58:20.142227 containerd[1545]: time="2026-01-23T18:58:20.141396127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:58:20.142435 kubelet[2810]: E0123 18:58:20.141570 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:20.142435 kubelet[2810]: E0123 18:58:20.141629 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:20.142435 kubelet[2810]: E0123 18:58:20.141839 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqcfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6dbdb8cb8d-x4l8g_calico-system(e349b807-19f1-4df8-a846-f2bc79a618bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:20.143364 kubelet[2810]: E0123 18:58:20.143301 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dbdb8cb8d-x4l8g" podUID="e349b807-19f1-4df8-a846-f2bc79a618bc" Jan 23 18:58:20.807951 containerd[1545]: time="2026-01-23T18:58:20.807867157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:58:20.964709 containerd[1545]: time="2026-01-23T18:58:20.964635841Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Jan 23 18:58:20.966479 containerd[1545]: time="2026-01-23T18:58:20.966364320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:58:20.966840 containerd[1545]: time="2026-01-23T18:58:20.966371470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:58:20.967124 kubelet[2810]: E0123 18:58:20.967016 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:20.967124 kubelet[2810]: E0123 18:58:20.967079 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:20.967751 kubelet[2810]: E0123 18:58:20.967623 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrwww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g5nws_calico-system(1aa00049-b6aa-4c4a-9b9a-78530a9aeb40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:20.971008 containerd[1545]: time="2026-01-23T18:58:20.970774561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:58:21.145873 containerd[1545]: time="2026-01-23T18:58:21.145695241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:21.147459 containerd[1545]: time="2026-01-23T18:58:21.147378662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:58:21.147764 containerd[1545]: time="2026-01-23T18:58:21.147416359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:58:21.147846 kubelet[2810]: E0123 18:58:21.147725 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:58:21.148432 kubelet[2810]: E0123 18:58:21.147851 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:58:21.148622 kubelet[2810]: E0123 18:58:21.148447 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrwww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g5nws_calico-system(1aa00049-b6aa-4c4a-9b9a-78530a9aeb40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:21.149925 kubelet[2810]: E0123 18:58:21.149857 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40" Jan 23 18:58:21.666706 ntpd[1688]: Listen normally on 6 vxlan.calico 192.168.88.192:123 Jan 23 18:58:21.666824 ntpd[1688]: Listen normally on 7 cali4c17302770d [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 18:58:21.666867 ntpd[1688]: Listen normally on 8 cali019f81d3172 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 23 18:58:21.666913 ntpd[1688]: Listen normally on 9 cali2ad63d66586 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 23 18:58:21.666954 ntpd[1688]: Listen normally on 10 cali8cb0b7bc300 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 23 18:58:21.667014 ntpd[1688]: Listen normally on 11 caliaae3c72e2eb [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 18:58:21.667058 ntpd[1688]: Listen normally on 12 calic33a9d9dad8 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 18:58:21.667097 ntpd[1688]: Listen normally on 13 calid6995a6d877 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 18:58:21.667136 ntpd[1688]: Listen normally on 14 cali4b3273642f7 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 18:58:21.667193 ntpd[1688]: Listen normally on 15 vxlan.calico [fe80::64cb:fff:fe45:60cc%12]:123 Jan 23 18:58:22.808695 containerd[1545]: time="2026-01-23T18:58:22.808158028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:22.964767 containerd[1545]: time="2026-01-23T18:58:22.964693107Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:22.966381 containerd[1545]: time="2026-01-23T18:58:22.966329745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:22.966582 containerd[1545]: time="2026-01-23T18:58:22.966436892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:58:22.966805 kubelet[2810]: E0123 18:58:22.966742 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:22.967792 kubelet[2810]: E0123 18:58:22.966807 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:22.967792
kubelet[2810]: E0123 18:58:22.967010 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wsgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587dd8bd56-xf4xr_calico-apiserver(8b961e3b-935a-4759-813c-935dbe2acf0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:22.968329 kubelet[2810]: E0123 18:58:22.968285 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" podUID="8b961e3b-935a-4759-813c-935dbe2acf0e" Jan 23 18:58:26.808318 containerd[1545]: time="2026-01-23T18:58:26.808229506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:58:26.965379 containerd[1545]: time="2026-01-23T18:58:26.965275336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:26.967332 containerd[1545]: time="2026-01-23T18:58:26.967230951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:58:26.967620 containerd[1545]: time="2026-01-23T18:58:26.967269465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:58:26.967872 kubelet[2810]: E0123 18:58:26.967805 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:58:26.968915 kubelet[2810]: E0123 18:58:26.967891 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:58:26.968915 kubelet[2810]: E0123 18:58:26.968846 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jw6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4z7p9_calico-system(995a2281-49c2-40bf-b075-9d751bff44f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:26.970110 kubelet[2810]: E0123 18:58:26.970053 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4z7p9" podUID="995a2281-49c2-40bf-b075-9d751bff44f2" Jan 23 18:58:29.401819 systemd[1]: Started sshd@10-10.128.0.7:22-4.153.228.146:45254.service - OpenSSH per-connection server daemon (4.153.228.146:45254). Jan 23 18:58:29.639223 sshd[5052]: Accepted publickey for core from 4.153.228.146 port 45254 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE Jan 23 18:58:29.640301 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:29.649275 systemd-logind[1526]: New session 10 of user core. Jan 23 18:58:29.654476 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:58:29.810194 containerd[1545]: time="2026-01-23T18:58:29.808815679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:58:29.955719 sshd[5055]: Connection closed by 4.153.228.146 port 45254 Jan 23 18:58:29.956371 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:29.963586 systemd[1]: sshd@10-10.128.0.7:22-4.153.228.146:45254.service: Deactivated successfully. Jan 23 18:58:29.967720 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:58:29.970479 systemd-logind[1526]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:58:29.973765 systemd-logind[1526]: Removed session 10. 
Jan 23 18:58:29.976781 containerd[1545]: time="2026-01-23T18:58:29.976573830Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:29.978314 containerd[1545]: time="2026-01-23T18:58:29.978250752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:58:29.979222 containerd[1545]: time="2026-01-23T18:58:29.978537838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:58:29.979345 kubelet[2810]: E0123 18:58:29.978711 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:58:29.979345 kubelet[2810]: E0123 18:58:29.978782 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:58:29.979345 kubelet[2810]: E0123 18:58:29.979088 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4kcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8f8898896-r4tmw_calico-system(64f0782c-e663-4cd4-b3ff-935ab7f31baa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:29.980509 containerd[1545]: time="2026-01-23T18:58:29.980398178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:29.980910 kubelet[2810]: E0123 18:58:29.980601 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" podUID="64f0782c-e663-4cd4-b3ff-935ab7f31baa" Jan 23 18:58:30.140077 containerd[1545]: time="2026-01-23T18:58:30.140008083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:30.141934 containerd[1545]: time="2026-01-23T18:58:30.141787868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:30.141934 containerd[1545]: time="2026-01-23T18:58:30.141822379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:58:30.142500 kubelet[2810]: E0123 18:58:30.142127 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:30.142500 kubelet[2810]: E0123 18:58:30.142219 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:30.142500 kubelet[2810]: 
E0123 18:58:30.142362 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qqhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587dd8bd56-9vqr7_calico-apiserver(8800bedc-6975-4cc8-8a9b-9da788a14188): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:30.143735 kubelet[2810]: E0123 18:58:30.143683 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" podUID="8800bedc-6975-4cc8-8a9b-9da788a14188" Jan 23 18:58:31.809125 kubelet[2810]: E0123 18:58:31.808671 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", 
failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dbdb8cb8d-x4l8g" podUID="e349b807-19f1-4df8-a846-f2bc79a618bc" Jan 23 18:58:33.807767 kubelet[2810]: E0123 18:58:33.807664 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40" Jan 23 18:58:35.006908 systemd[1]: Started sshd@11-10.128.0.7:22-4.153.228.146:54418.service - OpenSSH per-connection server daemon (4.153.228.146:54418). Jan 23 18:58:35.278277 sshd[5070]: Accepted publickey for core from 4.153.228.146 port 54418 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE Jan 23 18:58:35.279160 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:35.286329 systemd-logind[1526]: New session 11 of user core. Jan 23 18:58:35.292453 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:58:35.544071 sshd[5073]: Connection closed by 4.153.228.146 port 54418 Jan 23 18:58:35.544976 sshd-session[5070]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:35.550724 systemd[1]: sshd@11-10.128.0.7:22-4.153.228.146:54418.service: Deactivated successfully. Jan 23 18:58:35.553930 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:58:35.557636 systemd-logind[1526]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:58:35.559670 systemd-logind[1526]: Removed session 11. Jan 23 18:58:36.808930 kubelet[2810]: E0123 18:58:36.807648 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" podUID="8b961e3b-935a-4759-813c-935dbe2acf0e" Jan 23 18:58:40.593572 systemd[1]: Started sshd@12-10.128.0.7:22-4.153.228.146:54434.service - OpenSSH per-connection server daemon (4.153.228.146:54434). 
Jan 23 18:58:40.859221 sshd[5124]: Accepted publickey for core from 4.153.228.146 port 54434 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE Jan 23 18:58:40.861345 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:40.870093 systemd-logind[1526]: New session 12 of user core. Jan 23 18:58:40.874462 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 18:58:41.129081 sshd[5127]: Connection closed by 4.153.228.146 port 54434 Jan 23 18:58:41.130196 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:41.137024 systemd-logind[1526]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:58:41.137619 systemd[1]: sshd@12-10.128.0.7:22-4.153.228.146:54434.service: Deactivated successfully. Jan 23 18:58:41.141321 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:58:41.144264 systemd-logind[1526]: Removed session 12. Jan 23 18:58:41.176968 systemd[1]: Started sshd@13-10.128.0.7:22-4.153.228.146:54446.service - OpenSSH per-connection server daemon (4.153.228.146:54446). Jan 23 18:58:41.411413 sshd[5141]: Accepted publickey for core from 4.153.228.146 port 54446 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE Jan 23 18:58:41.413334 sshd-session[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:41.420247 systemd-logind[1526]: New session 13 of user core. Jan 23 18:58:41.427508 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 18:58:41.699527 sshd[5144]: Connection closed by 4.153.228.146 port 54446 Jan 23 18:58:41.700550 sshd-session[5141]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:41.709871 systemd[1]: sshd@13-10.128.0.7:22-4.153.228.146:54446.service: Deactivated successfully. Jan 23 18:58:41.716295 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:58:41.721443 systemd-logind[1526]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:58:41.723596 systemd-logind[1526]: Removed session 13. Jan 23 18:58:41.747534 systemd[1]: Started sshd@14-10.128.0.7:22-4.153.228.146:54458.service - OpenSSH per-connection server daemon (4.153.228.146:54458). Jan 23 18:58:41.807746 kubelet[2810]: E0123 18:58:41.807294 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4z7p9" podUID="995a2281-49c2-40bf-b075-9d751bff44f2" Jan 23 18:58:42.002686 sshd[5154]: Accepted publickey for core from 4.153.228.146 port 54458 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE Jan 23 18:58:42.004609 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:42.012221 systemd-logind[1526]: New session 14 of user core. Jan 23 18:58:42.017465 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 18:58:42.268858 sshd[5157]: Connection closed by 4.153.228.146 port 54458 Jan 23 18:58:42.270341 sshd-session[5154]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:42.276991 systemd[1]: sshd@14-10.128.0.7:22-4.153.228.146:54458.service: Deactivated successfully. Jan 23 18:58:42.280614 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:58:42.282884 systemd-logind[1526]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:58:42.285266 systemd-logind[1526]: Removed session 14. Jan 23 18:58:42.808928 kubelet[2810]: E0123 18:58:42.808865 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" podUID="64f0782c-e663-4cd4-b3ff-935ab7f31baa" Jan 23 18:58:42.811222 containerd[1545]: time="2026-01-23T18:58:42.809509890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:58:42.990644 containerd[1545]: time="2026-01-23T18:58:42.990424557Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:42.992264 containerd[1545]: time="2026-01-23T18:58:42.992192980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:58:42.992572 containerd[1545]: time="2026-01-23T18:58:42.992230146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:58:42.992743 kubelet[2810]: E0123 18:58:42.992680 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:42.992828 kubelet[2810]: E0123 18:58:42.992742 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:42.992997 kubelet[2810]: E0123 18:58:42.992939 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9cf6a28650de424abc477daf1038e0ae,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqcfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6dbdb8cb8d-x4l8g_calico-system(e349b807-19f1-4df8-a846-f2bc79a618bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:42.997158 containerd[1545]: time="2026-01-23T18:58:42.997076103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:58:43.159006 containerd[1545]: time="2026-01-23T18:58:43.158399613Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:43.160214 containerd[1545]: time="2026-01-23T18:58:43.160128910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:58:43.160433 containerd[1545]: time="2026-01-23T18:58:43.160150938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:58:43.160697 kubelet[2810]: E0123 18:58:43.160630 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:43.160802 kubelet[2810]: E0123 18:58:43.160699 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:43.160966 kubelet[2810]: E0123 18:58:43.160890 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqcfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6dbdb8cb8d-x4l8g_calico-system(e349b807-19f1-4df8-a846-f2bc79a618bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:43.162313 kubelet[2810]: E0123 18:58:43.162233 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dbdb8cb8d-x4l8g" podUID="e349b807-19f1-4df8-a846-f2bc79a618bc" Jan 23 18:58:43.806493 kubelet[2810]: E0123 18:58:43.806364 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" podUID="8800bedc-6975-4cc8-8a9b-9da788a14188" Jan 23 18:58:47.320567 systemd[1]: Started sshd@15-10.128.0.7:22-4.153.228.146:36492.service - OpenSSH per-connection server daemon (4.153.228.146:36492). Jan 23 18:58:47.589598 sshd[5173]: Accepted publickey for core from 4.153.228.146 port 36492 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE Jan 23 18:58:47.591517 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:47.598261 systemd-logind[1526]: New session 15 of user core. Jan 23 18:58:47.607682 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:58:47.810520 containerd[1545]: time="2026-01-23T18:58:47.810208600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:58:47.875504 sshd[5176]: Connection closed by 4.153.228.146 port 36492 Jan 23 18:58:47.876701 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:47.888708 systemd[1]: sshd@15-10.128.0.7:22-4.153.228.146:36492.service: Deactivated successfully. Jan 23 18:58:47.891902 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:58:47.893560 systemd-logind[1526]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:58:47.896363 systemd-logind[1526]: Removed session 15. Jan 23 18:58:47.975594 containerd[1545]: time="2026-01-23T18:58:47.975503770Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:47.977553 containerd[1545]: time="2026-01-23T18:58:47.977390591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:58:47.977553 containerd[1545]: time="2026-01-23T18:58:47.977444314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:58:47.978004 kubelet[2810]: E0123 18:58:47.977937 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:47.978004 kubelet[2810]: E0123 18:58:47.977999 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:47.978910 kubelet[2810]: E0123 18:58:47.978827 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrwww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g5nws_calico-system(1aa00049-b6aa-4c4a-9b9a-78530a9aeb40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:47.982009 containerd[1545]: time="2026-01-23T18:58:47.981678117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:58:48.141295 containerd[1545]: time="2026-01-23T18:58:48.141006010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:48.143895 containerd[1545]: time="2026-01-23T18:58:48.143677310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:58:48.143895 containerd[1545]: time="2026-01-23T18:58:48.143693926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:58:48.144517 kubelet[2810]: E0123 18:58:48.144355 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:58:48.144651 kubelet[2810]: E0123 18:58:48.144501 2810 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:58:48.145623 kubelet[2810]: E0123 18:58:48.145363 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrwww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g5nws_calico-system(1aa00049-b6aa-4c4a-9b9a-78530a9aeb40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:48.147115 kubelet[2810]: E0123 18:58:48.147029 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40" Jan 23 18:58:50.808299 containerd[1545]: time="2026-01-23T18:58:50.807880741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:50.981642 containerd[1545]: time="2026-01-23T18:58:50.981539822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:50.983141 containerd[1545]: time="2026-01-23T18:58:50.983073760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:50.983308 containerd[1545]: time="2026-01-23T18:58:50.983209222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:58:50.983484 kubelet[2810]: E0123 18:58:50.983419 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:50.984257 kubelet[2810]: E0123 18:58:50.983491 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:50.984257 kubelet[2810]: E0123 18:58:50.983692 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wsgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587dd8bd56-xf4xr_calico-apiserver(8b961e3b-935a-4759-813c-935dbe2acf0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:50.985476 kubelet[2810]: E0123 18:58:50.985419 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" podUID="8b961e3b-935a-4759-813c-935dbe2acf0e" Jan 23 18:58:52.924778 systemd[1]: Started sshd@16-10.128.0.7:22-4.153.228.146:36494.service - OpenSSH per-connection server daemon (4.153.228.146:36494). Jan 23 18:58:53.170644 sshd[5189]: Accepted publickey for core from 4.153.228.146 port 36494 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE Jan 23 18:58:53.172386 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:53.178519 systemd-logind[1526]: New session 16 of user core. Jan 23 18:58:53.193394 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 18:58:53.424876 sshd[5192]: Connection closed by 4.153.228.146 port 36494 Jan 23 18:58:53.426488 sshd-session[5189]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:53.432893 systemd-logind[1526]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:58:53.433538 systemd[1]: sshd@16-10.128.0.7:22-4.153.228.146:36494.service: Deactivated successfully. Jan 23 18:58:53.436703 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:58:53.439959 systemd-logind[1526]: Removed session 16. 
Jan 23 18:58:54.809378 kubelet[2810]: E0123 18:58:54.809304 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dbdb8cb8d-x4l8g" podUID="e349b807-19f1-4df8-a846-f2bc79a618bc"
Jan 23 18:58:55.806354 containerd[1545]: time="2026-01-23T18:58:55.806219516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 18:58:55.966113 containerd[1545]: time="2026-01-23T18:58:55.966027542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:55.969027 containerd[1545]: time="2026-01-23T18:58:55.968943677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 18:58:55.969223 containerd[1545]: time="2026-01-23T18:58:55.968965536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:58:55.969503 kubelet[2810]: E0123 18:58:55.969421 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 18:58:55.970041 kubelet[2810]: E0123 18:58:55.969502 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 18:58:55.970426 kubelet[2810]: E0123 18:58:55.970316 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jw6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4z7p9_calico-system(995a2281-49c2-40bf-b075-9d751bff44f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:55.971831 kubelet[2810]: E0123 18:58:55.971783 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4z7p9" podUID="995a2281-49c2-40bf-b075-9d751bff44f2"
Jan 23 18:58:56.821201 containerd[1545]: time="2026-01-23T18:58:56.820081507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 18:58:56.997353 containerd[1545]: time="2026-01-23T18:58:56.997290714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:56.999012 containerd[1545]: time="2026-01-23T18:58:56.998948901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 18:58:56.999285 containerd[1545]: time="2026-01-23T18:58:56.998968494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 18:58:56.999505 kubelet[2810]: E0123 18:58:56.999428 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 18:58:57.000215 kubelet[2810]: E0123 18:58:56.999513 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 18:58:57.000215 kubelet[2810]: E0123 18:58:56.999757 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4kcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8f8898896-r4tmw_calico-system(64f0782c-e663-4cd4-b3ff-935ab7f31baa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:57.001096 kubelet[2810]: E0123 18:58:57.000997 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" podUID="64f0782c-e663-4cd4-b3ff-935ab7f31baa"
Jan 23 18:58:58.469564 systemd[1]: Started sshd@17-10.128.0.7:22-4.153.228.146:44448.service - OpenSSH per-connection server daemon (4.153.228.146:44448).
Jan 23 18:58:58.698623 sshd[5212]: Accepted publickey for core from 4.153.228.146 port 44448 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:58:58.700347 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:58.707295 systemd-logind[1526]: New session 17 of user core.
Jan 23 18:58:58.715430 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 18:58:58.808460 containerd[1545]: time="2026-01-23T18:58:58.808399912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 18:58:58.961673 sshd[5215]: Connection closed by 4.153.228.146 port 44448
Jan 23 18:58:58.963464 sshd-session[5212]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:58.969205 containerd[1545]: time="2026-01-23T18:58:58.969027054Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:58.969923 systemd[1]: sshd@17-10.128.0.7:22-4.153.228.146:44448.service: Deactivated successfully.
Jan 23 18:58:58.971859 containerd[1545]: time="2026-01-23T18:58:58.971272300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 18:58:58.971859 containerd[1545]: time="2026-01-23T18:58:58.971332636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:58:58.972040 kubelet[2810]: E0123 18:58:58.971598 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:58:58.972040 kubelet[2810]: E0123 18:58:58.971676 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:58:58.972040 kubelet[2810]: E0123 18:58:58.971924 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qqhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587dd8bd56-9vqr7_calico-apiserver(8800bedc-6975-4cc8-8a9b-9da788a14188): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:58.974006 kubelet[2810]: E0123 18:58:58.973931 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" podUID="8800bedc-6975-4cc8-8a9b-9da788a14188"
Jan 23 18:58:58.976263 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 18:58:58.979247 systemd-logind[1526]: Session 17 logged out. Waiting for processes to exit.
Jan 23 18:58:58.981584 systemd-logind[1526]: Removed session 17.
Jan 23 18:59:02.811044 kubelet[2810]: E0123 18:59:02.810940 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40"
Jan 23 18:59:04.004791 systemd[1]: Started sshd@18-10.128.0.7:22-4.153.228.146:44458.service - OpenSSH per-connection server daemon (4.153.228.146:44458).
Jan 23 18:59:04.237993 sshd[5230]: Accepted publickey for core from 4.153.228.146 port 44458 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:59:04.240030 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:59:04.246337 systemd-logind[1526]: New session 18 of user core.
Jan 23 18:59:04.255429 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 18:59:04.491359 sshd[5233]: Connection closed by 4.153.228.146 port 44458
Jan 23 18:59:04.492515 sshd-session[5230]: pam_unix(sshd:session): session closed for user core
Jan 23 18:59:04.498826 systemd[1]: sshd@18-10.128.0.7:22-4.153.228.146:44458.service: Deactivated successfully.
Jan 23 18:59:04.502115 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 18:59:04.504048 systemd-logind[1526]: Session 18 logged out. Waiting for processes to exit.
Jan 23 18:59:04.506008 systemd-logind[1526]: Removed session 18.
Jan 23 18:59:04.533258 systemd[1]: Started sshd@19-10.128.0.7:22-4.153.228.146:36878.service - OpenSSH per-connection server daemon (4.153.228.146:36878).
Jan 23 18:59:04.765431 sshd[5245]: Accepted publickey for core from 4.153.228.146 port 36878 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:59:04.767820 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:59:04.776616 systemd-logind[1526]: New session 19 of user core.
Jan 23 18:59:04.781380 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 18:59:04.811524 kubelet[2810]: E0123 18:59:04.811348 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" podUID="8b961e3b-935a-4759-813c-935dbe2acf0e"
Jan 23 18:59:05.066119 sshd[5248]: Connection closed by 4.153.228.146 port 36878
Jan 23 18:59:05.067113 sshd-session[5245]: pam_unix(sshd:session): session closed for user core
Jan 23 18:59:05.074027 systemd-logind[1526]: Session 19 logged out. Waiting for processes to exit.
Jan 23 18:59:05.074449 systemd[1]: sshd@19-10.128.0.7:22-4.153.228.146:36878.service: Deactivated successfully.
Jan 23 18:59:05.078005 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 18:59:05.081211 systemd-logind[1526]: Removed session 19.
Jan 23 18:59:05.113543 systemd[1]: Started sshd@20-10.128.0.7:22-4.153.228.146:36882.service - OpenSSH per-connection server daemon (4.153.228.146:36882).
Jan 23 18:59:05.348681 sshd[5258]: Accepted publickey for core from 4.153.228.146 port 36882 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:59:05.350945 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:59:05.360445 systemd-logind[1526]: New session 20 of user core.
Jan 23 18:59:05.368379 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 18:59:06.184665 sshd[5261]: Connection closed by 4.153.228.146 port 36882
Jan 23 18:59:06.186441 sshd-session[5258]: pam_unix(sshd:session): session closed for user core
Jan 23 18:59:06.197651 systemd[1]: sshd@20-10.128.0.7:22-4.153.228.146:36882.service: Deactivated successfully.
Jan 23 18:59:06.197975 systemd-logind[1526]: Session 20 logged out. Waiting for processes to exit.
Jan 23 18:59:06.203900 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 18:59:06.209014 systemd-logind[1526]: Removed session 20.
Jan 23 18:59:06.238544 systemd[1]: Started sshd@21-10.128.0.7:22-4.153.228.146:36886.service - OpenSSH per-connection server daemon (4.153.228.146:36886).
Jan 23 18:59:06.500991 sshd[5278]: Accepted publickey for core from 4.153.228.146 port 36886 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:59:06.502539 sshd-session[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:59:06.510275 systemd-logind[1526]: New session 21 of user core.
Jan 23 18:59:06.515426 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 18:59:06.937124 sshd[5281]: Connection closed by 4.153.228.146 port 36886
Jan 23 18:59:06.938536 sshd-session[5278]: pam_unix(sshd:session): session closed for user core
Jan 23 18:59:06.943681 systemd[1]: sshd@21-10.128.0.7:22-4.153.228.146:36886.service: Deactivated successfully.
Jan 23 18:59:06.946939 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 18:59:06.949606 systemd-logind[1526]: Session 21 logged out. Waiting for processes to exit.
Jan 23 18:59:06.951987 systemd-logind[1526]: Removed session 21.
Jan 23 18:59:06.980719 systemd[1]: Started sshd@22-10.128.0.7:22-4.153.228.146:36900.service - OpenSSH per-connection server daemon (4.153.228.146:36900).
Jan 23 18:59:07.224085 sshd[5291]: Accepted publickey for core from 4.153.228.146 port 36900 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:59:07.227201 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:59:07.236344 systemd-logind[1526]: New session 22 of user core.
Jan 23 18:59:07.245202 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 18:59:07.472555 sshd[5320]: Connection closed by 4.153.228.146 port 36900
Jan 23 18:59:07.475008 sshd-session[5291]: pam_unix(sshd:session): session closed for user core
Jan 23 18:59:07.480666 systemd[1]: sshd@22-10.128.0.7:22-4.153.228.146:36900.service: Deactivated successfully.
Jan 23 18:59:07.484036 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 18:59:07.485646 systemd-logind[1526]: Session 22 logged out. Waiting for processes to exit.
Jan 23 18:59:07.488806 systemd-logind[1526]: Removed session 22.
Jan 23 18:59:07.806570 kubelet[2810]: E0123 18:59:07.806361 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4z7p9" podUID="995a2281-49c2-40bf-b075-9d751bff44f2"
Jan 23 18:59:09.807201 kubelet[2810]: E0123 18:59:09.807051 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dbdb8cb8d-x4l8g" podUID="e349b807-19f1-4df8-a846-f2bc79a618bc"
Jan 23 18:59:11.807243 kubelet[2810]: E0123 18:59:11.807114 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" podUID="64f0782c-e663-4cd4-b3ff-935ab7f31baa"
Jan 23 18:59:12.517472 systemd[1]: Started sshd@23-10.128.0.7:22-4.153.228.146:36912.service - OpenSSH per-connection server daemon (4.153.228.146:36912).
Jan 23 18:59:12.756822 sshd[5332]: Accepted publickey for core from 4.153.228.146 port 36912 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:59:12.758700 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:59:12.767271 systemd-logind[1526]: New session 23 of user core.
Jan 23 18:59:12.772413 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 18:59:13.014961 sshd[5335]: Connection closed by 4.153.228.146 port 36912
Jan 23 18:59:13.016517 sshd-session[5332]: pam_unix(sshd:session): session closed for user core
Jan 23 18:59:13.022032 systemd[1]: sshd@23-10.128.0.7:22-4.153.228.146:36912.service: Deactivated successfully.
Jan 23 18:59:13.025524 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 18:59:13.028984 systemd-logind[1526]: Session 23 logged out. Waiting for processes to exit.
Jan 23 18:59:13.031430 systemd-logind[1526]: Removed session 23.
Jan 23 18:59:13.806989 kubelet[2810]: E0123 18:59:13.806854 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" podUID="8800bedc-6975-4cc8-8a9b-9da788a14188"
Jan 23 18:59:13.809056 kubelet[2810]: E0123 18:59:13.808812 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g5nws" podUID="1aa00049-b6aa-4c4a-9b9a-78530a9aeb40"
Jan 23 18:59:17.805803 kubelet[2810]: E0123 18:59:17.805676 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-xf4xr" podUID="8b961e3b-935a-4759-813c-935dbe2acf0e"
Jan 23 18:59:18.065491 systemd[1]: Started sshd@24-10.128.0.7:22-4.153.228.146:36840.service - OpenSSH per-connection server daemon (4.153.228.146:36840).
Jan 23 18:59:18.346816 sshd[5351]: Accepted publickey for core from 4.153.228.146 port 36840 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:59:18.348459 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:59:18.356248 systemd-logind[1526]: New session 24 of user core.
Jan 23 18:59:18.361387 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 18:59:18.624874 sshd[5354]: Connection closed by 4.153.228.146 port 36840
Jan 23 18:59:18.625780 sshd-session[5351]: pam_unix(sshd:session): session closed for user core
Jan 23 18:59:18.633090 systemd[1]: sshd@24-10.128.0.7:22-4.153.228.146:36840.service: Deactivated successfully.
Jan 23 18:59:18.636715 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 18:59:18.638285 systemd-logind[1526]: Session 24 logged out. Waiting for processes to exit.
Jan 23 18:59:18.640995 systemd-logind[1526]: Removed session 24.
Jan 23 18:59:19.807870 kubelet[2810]: E0123 18:59:19.807472 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4z7p9" podUID="995a2281-49c2-40bf-b075-9d751bff44f2"
Jan 23 18:59:22.815005 kubelet[2810]: E0123 18:59:22.814906 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dbdb8cb8d-x4l8g" podUID="e349b807-19f1-4df8-a846-f2bc79a618bc"
Jan 23 18:59:23.668335 systemd[1]: Started sshd@25-10.128.0.7:22-4.153.228.146:36850.service - OpenSSH per-connection server daemon (4.153.228.146:36850).
Jan 23 18:59:23.907661 sshd[5368]: Accepted publickey for core from 4.153.228.146 port 36850 ssh2: RSA SHA256:JpbtWgcs/bT1Of3u3Cg3/JeExdcQBZESokAhS8cweEE
Jan 23 18:59:23.909554 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:59:23.917288 systemd-logind[1526]: New session 25 of user core.
Jan 23 18:59:23.925440 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 18:59:24.212330 sshd[5371]: Connection closed by 4.153.228.146 port 36850
Jan 23 18:59:24.214005 sshd-session[5368]: pam_unix(sshd:session): session closed for user core
Jan 23 18:59:24.224099 systemd[1]: sshd@25-10.128.0.7:22-4.153.228.146:36850.service: Deactivated successfully.
Jan 23 18:59:24.224239 systemd-logind[1526]: Session 25 logged out. Waiting for processes to exit.
Jan 23 18:59:24.230459 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 18:59:24.236627 systemd-logind[1526]: Removed session 25.
Jan 23 18:59:24.810500 kubelet[2810]: E0123 18:59:24.810433 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587dd8bd56-9vqr7" podUID="8800bedc-6975-4cc8-8a9b-9da788a14188"
Jan 23 18:59:25.809500 kubelet[2810]: E0123 18:59:25.808129 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f8898896-r4tmw" podUID="64f0782c-e663-4cd4-b3ff-935ab7f31baa"