Jan 23 01:08:42.172783 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:08:42.172831 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:08:42.172855 kernel: BIOS-provided physical RAM map:
Jan 23 01:08:42.172869 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 23 01:08:42.172881 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 23 01:08:42.172894 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 23 01:08:42.172910 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 23 01:08:42.172924 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 23 01:08:42.172937 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd2e4fff] usable
Jan 23 01:08:42.172955 kernel: BIOS-e820: [mem 0x00000000bd2e5000-0x00000000bd2eefff] ACPI data
Jan 23 01:08:42.172968 kernel: BIOS-e820: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] usable
Jan 23 01:08:42.172982 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Jan 23 01:08:42.172998 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 23 01:08:42.173012 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 23 01:08:42.173030 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 23 01:08:42.173049 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 23 01:08:42.173062 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 23 01:08:42.173077 kernel: NX (Execute Disable) protection: active
Jan 23 01:08:42.173091 kernel: APIC: Static calls initialized
Jan 23 01:08:42.173106 kernel: efi: EFI v2.7 by EDK II
Jan 23 01:08:42.173120 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 RNG=0xbfb73018 TPMEventLog=0xbd2e5018
Jan 23 01:08:42.173135 kernel: random: crng init done
Jan 23 01:08:42.173148 kernel: secureboot: Secure boot disabled
Jan 23 01:08:42.173163 kernel: SMBIOS 2.4 present.
Jan 23 01:08:42.173190 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Jan 23 01:08:42.173212 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:08:42.173228 kernel: Hypervisor detected: KVM
Jan 23 01:08:42.173244 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 23 01:08:42.173260 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:08:42.173276 kernel: kvm-clock: using sched offset of 16198720830 cycles
Jan 23 01:08:42.173293 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:08:42.173310 kernel: tsc: Detected 2299.998 MHz processor
Jan 23 01:08:42.173327 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:08:42.173344 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:08:42.173360 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 23 01:08:42.173380 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 23 01:08:42.173397 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:08:42.173414 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 23 01:08:42.173428 kernel: Using GB pages for direct mapping
Jan 23 01:08:42.173443 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:08:42.173466 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 23 01:08:42.173483 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 23 01:08:42.173504 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 23 01:08:42.173522 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 23 01:08:42.174620 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 23 01:08:42.174640 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Jan 23 01:08:42.174655 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 23 01:08:42.174670 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 23 01:08:42.174687 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 23 01:08:42.174713 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 23 01:08:42.174731 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 23 01:08:42.174749 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 23 01:08:42.174767 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 23 01:08:42.174785 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 23 01:08:42.174802 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 23 01:08:42.174820 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 23 01:08:42.174838 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 23 01:08:42.174856 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 23 01:08:42.174877 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 23 01:08:42.174895 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 23 01:08:42.174914 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 23 01:08:42.174932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 23 01:08:42.174948 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 23 01:08:42.174966 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Jan 23 01:08:42.174985 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Jan 23 01:08:42.175002 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
Jan 23 01:08:42.175021 kernel: Zone ranges:
Jan 23 01:08:42.175045 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:08:42.175064 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 01:08:42.175083 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 23 01:08:42.175102 kernel: Device empty
Jan 23 01:08:42.175120 kernel: Movable zone start for each node
Jan 23 01:08:42.175140 kernel: Early memory node ranges
Jan 23 01:08:42.175158 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 23 01:08:42.175257 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 23 01:08:42.175304 kernel: node 0: [mem 0x0000000000100000-0x00000000bd2e4fff]
Jan 23 01:08:42.175336 kernel: node 0: [mem 0x00000000bd2ef000-0x00000000bf8ecfff]
Jan 23 01:08:42.175355 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 23 01:08:42.175375 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 23 01:08:42.175393 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 23 01:08:42.175412 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:08:42.175431 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 23 01:08:42.175448 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 23 01:08:42.175468 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Jan 23 01:08:42.175487 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 23 01:08:42.175510 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 23 01:08:42.175544 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 23 01:08:42.175579 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:08:42.175598 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 01:08:42.175618 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:08:42.175637 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:08:42.175656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:08:42.175676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:08:42.175695 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:08:42.175721 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:08:42.175740 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:08:42.175759 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:08:42.175779 kernel: CPU topo: Max. threads per core: 2
Jan 23 01:08:42.175798 kernel: CPU topo: Num. cores per package: 1
Jan 23 01:08:42.175818 kernel: CPU topo: Num. threads per package: 2
Jan 23 01:08:42.175836 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 01:08:42.175855 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 23 01:08:42.175875 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:08:42.175893 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:08:42.175914 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 01:08:42.175934 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 01:08:42.175953 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 01:08:42.175972 kernel: pcpu-alloc: [0] 0 1
Jan 23 01:08:42.175991 kernel: kvm-guest: PV spinlocks enabled
Jan 23 01:08:42.176011 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:08:42.176033 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:08:42.176053 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 23 01:08:42.176076 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 01:08:42.176095 kernel: Fallback order for Node 0: 0
Jan 23 01:08:42.176115 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136
Jan 23 01:08:42.176134 kernel: Policy zone: Normal
Jan 23 01:08:42.176154 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:08:42.176175 kernel: software IO TLB: area num 2.
Jan 23 01:08:42.176224 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 01:08:42.176250 kernel: Kernel/User page tables isolation: enabled
Jan 23 01:08:42.176271 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:08:42.176291 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:08:42.176311 kernel: Dynamic Preempt: voluntary
Jan 23 01:08:42.176331 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:08:42.176358 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:08:42.176380 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 01:08:42.176400 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:08:42.176422 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:08:42.176442 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:08:42.176466 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:08:42.176488 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 01:08:42.176508 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:08:42.177518 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:08:42.177576 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:08:42.177594 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 01:08:42.177613 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:08:42.177632 kernel: Console: colour dummy device 80x25
Jan 23 01:08:42.177659 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:08:42.177678 kernel: ACPI: Core revision 20240827
Jan 23 01:08:42.177697 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:08:42.177716 kernel: x2apic enabled
Jan 23 01:08:42.177735 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:08:42.177755 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 23 01:08:42.177774 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 23 01:08:42.177793 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 23 01:08:42.177813 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 23 01:08:42.177832 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 23 01:08:42.177855 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:08:42.177874 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Jan 23 01:08:42.177893 kernel: Spectre V2 : Mitigation: IBRS
Jan 23 01:08:42.177911 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:08:42.177930 kernel: RETBleed: Mitigation: IBRS
Jan 23 01:08:42.177949 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 01:08:42.177968 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 23 01:08:42.177987 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 01:08:42.178010 kernel: MDS: Mitigation: Clear CPU buffers
Jan 23 01:08:42.178029 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:08:42.178048 kernel: active return thunk: its_return_thunk
Jan 23 01:08:42.178067 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 01:08:42.178086 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:08:42.178104 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:08:42.178124 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:08:42.178143 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:08:42.178169 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 23 01:08:42.178191 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:08:42.178234 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:08:42.178253 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:08:42.178272 kernel: landlock: Up and running.
Jan 23 01:08:42.178291 kernel: SELinux: Initializing.
Jan 23 01:08:42.178310 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 01:08:42.178329 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 01:08:42.178348 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 23 01:08:42.178367 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 23 01:08:42.178390 kernel: signal: max sigframe size: 1776
Jan 23 01:08:42.178409 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:08:42.178429 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:08:42.178449 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 01:08:42.178466 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 01:08:42.178485 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:08:42.178504 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:08:42.179560 kernel: .... node #0, CPUs: #1
Jan 23 01:08:42.179598 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 23 01:08:42.179627 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 23 01:08:42.179647 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 01:08:42.179666 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 23 01:08:42.179686 kernel: Memory: 7555808K/7860544K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 298904K reserved, 0K cma-reserved)
Jan 23 01:08:42.179705 kernel: devtmpfs: initialized
Jan 23 01:08:42.179723 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:08:42.179742 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 23 01:08:42.179761 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:08:42.179784 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 01:08:42.179802 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:08:42.179822 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:08:42.179840 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:08:42.179860 kernel: audit: type=2000 audit(1769130516.419:1): state=initialized audit_enabled=0 res=1
Jan 23 01:08:42.179878 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:08:42.179897 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:08:42.179916 kernel: cpuidle: using governor menu
Jan 23 01:08:42.179935 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:08:42.179957 kernel: dca service started, version 1.12.1
Jan 23 01:08:42.179997 kernel: PCI: Using configuration type 1 for base access
Jan 23 01:08:42.180016 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:08:42.180034 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:08:42.180053 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:08:42.180072 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:08:42.180090 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:08:42.180108 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:08:42.180127 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:08:42.180150 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:08:42.180168 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 23 01:08:42.180187 kernel: ACPI: Interpreter enabled
Jan 23 01:08:42.180206 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 01:08:42.180230 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:08:42.180249 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:08:42.180268 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 23 01:08:42.180286 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 23 01:08:42.180304 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:08:42.180619 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:08:42.180838 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 23 01:08:42.181018 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 23 01:08:42.181038 kernel: PCI host bridge to bus 0000:00
Jan 23 01:08:42.181250 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:08:42.181425 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:08:42.185733 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:08:42.185941 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 23 01:08:42.186297 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:08:42.187910 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:08:42.188941 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:08:42.189808 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 23 01:08:42.192662 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 23 01:08:42.192908 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Jan 23 01:08:42.193133 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jan 23 01:08:42.193325 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Jan 23 01:08:42.194908 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 01:08:42.195142 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Jan 23 01:08:42.195356 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Jan 23 01:08:42.195602 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 01:08:42.195825 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Jan 23 01:08:42.196015 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Jan 23 01:08:42.196039 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:08:42.196058 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:08:42.196077 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:08:42.196096 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:08:42.196187 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 23 01:08:42.196219 kernel: iommu: Default domain type: Translated
Jan 23 01:08:42.196239 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:08:42.196259 kernel: efivars: Registered efivars operations
Jan 23 01:08:42.196278 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:08:42.196297 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:08:42.196316 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 23 01:08:42.196335 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 23 01:08:42.196465 kernel: e820: reserve RAM buffer [mem 0xbd2e5000-0xbfffffff]
Jan 23 01:08:42.196497 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 23 01:08:42.197570 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 23 01:08:42.197601 kernel: vgaarb: loaded
Jan 23 01:08:42.197622 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:08:42.197641 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:08:42.197660 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:08:42.197691 kernel: pnp: PnP ACPI init
Jan 23 01:08:42.197709 kernel: pnp: PnP ACPI: found 7 devices
Jan 23 01:08:42.197729 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:08:42.197748 kernel: NET: Registered PF_INET protocol family
Jan 23 01:08:42.197774 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 01:08:42.197793 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 23 01:08:42.197812 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:08:42.197831 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 01:08:42.197850 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 23 01:08:42.197869 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 23 01:08:42.197888 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 01:08:42.197907 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 01:08:42.197926 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:08:42.197949 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:08:42.198158 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:08:42.198334 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:08:42.198512 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:08:42.199758 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 23 01:08:42.199988 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 23 01:08:42.200017 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:08:42.200043 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 01:08:42.200063 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 23 01:08:42.200088 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 23 01:08:42.200108 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 23 01:08:42.200127 kernel: clocksource: Switched to clocksource tsc
Jan 23 01:08:42.200147 kernel: Initialise system trusted keyrings
Jan 23 01:08:42.200166 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 23 01:08:42.200185 kernel: Key type asymmetric registered
Jan 23 01:08:42.200204 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:08:42.200228 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:08:42.200247 kernel: io scheduler mq-deadline registered
Jan 23 01:08:42.200266 kernel: io scheduler kyber registered
Jan 23 01:08:42.200285 kernel: io scheduler bfq registered
Jan 23 01:08:42.200304 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:08:42.200323 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 23 01:08:42.201579 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 23 01:08:42.201624 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 23 01:08:42.201870 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 23 01:08:42.201905 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 23 01:08:42.202101 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 23 01:08:42.202126 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:08:42.202146 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:08:42.202263 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 23 01:08:42.202289 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 23 01:08:42.202306 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 23 01:08:42.203586 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 23 01:08:42.203632 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 01:08:42.203652 kernel: i8042: Warning: Keylock active
Jan 23 01:08:42.203671 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 01:08:42.203689 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 01:08:42.203908 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 23 01:08:42.204094 kernel: rtc_cmos 00:00: registered as rtc0
Jan 23 01:08:42.204279 kernel: rtc_cmos 00:00: setting system clock to 2026-01-23T01:08:41 UTC (1769130521)
Jan 23 01:08:42.204461 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 23 01:08:42.204489 kernel: intel_pstate: CPU model not supported
Jan 23 01:08:42.204507 kernel: pstore: Using crash dump compression: deflate
Jan 23 01:08:42.207390 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 01:08:42.207441 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:08:42.207465 kernel: Segment Routing with IPv6
Jan 23 01:08:42.207482 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:08:42.207504 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:08:42.207523 kernel: Key type dns_resolver registered
Jan 23 01:08:42.207570 kernel: IPI shorthand broadcast: enabled
Jan 23 01:08:42.207597 kernel: sched_clock: Marking stable (4005005616, 1004857231)->(5366124330, -356261483)
Jan 23 01:08:42.207614 kernel: registered taskstats version 1
Jan 23 01:08:42.207632 kernel: Loading compiled-in X.509 certificates
Jan 23 01:08:42.207650 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:08:42.207668 kernel: Demotion targets for Node 0: null
Jan 23 01:08:42.207693 kernel: Key type .fscrypt registered
Jan 23 01:08:42.207709 kernel: Key type fscrypt-provisioning registered
Jan 23 01:08:42.207729 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:08:42.207751 kernel: ima: No architecture policies found
Jan 23 01:08:42.207779 kernel: clk: Disabling unused clocks
Jan 23 01:08:42.207799 kernel: Warning: unable to open an initial console.
Jan 23 01:08:42.207818 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 01:08:42.207838 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 01:08:42.207859 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 01:08:42.207879 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 01:08:42.207898 kernel: Run /init as init process
Jan 23 01:08:42.207917 kernel: with arguments:
Jan 23 01:08:42.207937 kernel: /init
Jan 23 01:08:42.207963 kernel: with environment:
Jan 23 01:08:42.207996 kernel: HOME=/
Jan 23 01:08:42.208016 kernel: TERM=linux
Jan 23 01:08:42.208038 systemd[1]: Successfully made /usr/ read-only.
Jan 23 01:08:42.208063 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:08:42.208085 systemd[1]: Detected virtualization google.
Jan 23 01:08:42.208106 systemd[1]: Detected architecture x86-64.
Jan 23 01:08:42.208131 systemd[1]: Running in initrd.
Jan 23 01:08:42.208150 systemd[1]: No hostname configured, using default hostname.
Jan 23 01:08:42.208170 systemd[1]: Hostname set to .
Jan 23 01:08:42.208189 systemd[1]: Initializing machine ID from random generator.
Jan 23 01:08:42.208287 systemd[1]: Queued start job for default target initrd.target.
Jan 23 01:08:42.208323 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:08:42.208370 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:08:42.208394 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 01:08:42.208412 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:08:42.208431 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 01:08:42.208460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 01:08:42.208481 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 01:08:42.208505 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 01:08:42.208542 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:08:42.208585 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:08:42.208603 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:08:42.208621 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:08:42.208640 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:08:42.208658 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:08:42.208677 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:08:42.208695 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:08:42.208720 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 01:08:42.208738 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 01:08:42.208757 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:08:42.208775 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:08:42.208794 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:08:42.208813 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:08:42.208831 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 01:08:42.208850 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:08:42.208870 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 01:08:42.208898 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 01:08:42.208918 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 01:08:42.208937 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:08:42.208957 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:08:42.208977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:08:42.208997 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 01:08:42.209022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:08:42.209042 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 01:08:42.209063 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:08:42.209144 systemd-journald[192]: Collecting audit messages is disabled.
Jan 23 01:08:42.209194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:08:42.209215 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 01:08:42.209236 systemd-journald[192]: Journal started
Jan 23 01:08:42.209280 systemd-journald[192]: Runtime Journal (/run/log/journal/1b3c219db84b4058bbd8eb466eccb5c0) is 8M, max 148.6M, 140.6M free.
Jan 23 01:08:42.169923 systemd-modules-load[193]: Inserted module 'overlay'
Jan 23 01:08:42.216559 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:08:42.220651 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:08:42.227122 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 01:08:42.230554 kernel: Bridge firewalling registered
Jan 23 01:08:42.230646 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jan 23 01:08:42.232756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:08:42.237733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:08:42.239495 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:08:42.247224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:08:42.261131 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:08:42.269787 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 01:08:42.278104 systemd-tmpfiles[212]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 01:08:42.289315 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:08:42.291227 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:08:42.299099 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:08:42.306890 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:08:42.315001 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:08:42.378318 systemd-resolved[238]: Positive Trust Anchors:
Jan 23 01:08:42.378918 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:08:42.379130 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:08:42.386413 systemd-resolved[238]: Defaulting to hostname 'linux'.
Jan 23 01:08:42.391208 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:08:42.402818 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:08:42.445573 kernel: SCSI subsystem initialized
Jan 23 01:08:42.459554 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 01:08:42.470576 kernel: iscsi: registered transport (tcp)
Jan 23 01:08:42.495645 kernel: iscsi: registered transport (qla4xxx)
Jan 23 01:08:42.495740 kernel: QLogic iSCSI HBA Driver
Jan 23 01:08:42.519632 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:08:42.543185 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:08:42.551920 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:08:42.614702 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:08:42.617915 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 01:08:42.684597 kernel: raid6: avx2x4 gen() 17745 MB/s
Jan 23 01:08:42.702576 kernel: raid6: avx2x2 gen() 17312 MB/s
Jan 23 01:08:42.721056 kernel: raid6: avx2x1 gen() 12928 MB/s
Jan 23 01:08:42.721168 kernel: raid6: using algorithm avx2x4 gen() 17745 MB/s
Jan 23 01:08:42.740581 kernel: raid6: .... xor() 7174 MB/s, rmw enabled
Jan 23 01:08:42.740679 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 01:08:42.764613 kernel: xor: automatically using best checksumming function avx
Jan 23 01:08:42.955574 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 01:08:42.965308 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:08:42.974949 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:08:43.007869 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Jan 23 01:08:43.017417 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:08:43.022264 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 01:08:43.059839 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation
Jan 23 01:08:43.097900 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:08:43.106506 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:08:43.208114 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:08:43.217429 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 01:08:43.351568 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 01:08:43.358563 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Jan 23 01:08:43.389562 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 23 01:08:43.400175 kernel: scsi host0: Virtio SCSI HBA
Jan 23 01:08:43.400291 kernel: blk-mq: reduced tag depth to 10240
Jan 23 01:08:43.408580 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 23 01:08:43.421567 kernel: AES CTR mode by8 optimization enabled
Jan 23 01:08:43.421908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:08:43.429176 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:08:43.447670 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:08:43.452017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:08:43.461916 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:08:43.499679 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Jan 23 01:08:43.500041 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 23 01:08:43.500290 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 23 01:08:43.500485 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 23 01:08:43.500786 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 01:08:43.520822 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 01:08:43.520905 kernel: GPT:17805311 != 33554431
Jan 23 01:08:43.520931 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 01:08:43.522230 kernel: GPT:17805311 != 33554431
Jan 23 01:08:43.522274 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 01:08:43.523650 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:08:43.524652 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 23 01:08:43.530216 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:08:43.636213 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 23 01:08:43.637121 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:08:43.655855 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 23 01:08:43.680222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 23 01:08:43.692811 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 23 01:08:43.700732 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 23 01:08:43.701117 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:08:43.717770 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:08:43.726762 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:08:43.728601 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 01:08:43.749759 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 01:08:43.766880 disk-uuid[594]: Primary Header is updated.
Jan 23 01:08:43.766880 disk-uuid[594]: Secondary Entries is updated.
Jan 23 01:08:43.766880 disk-uuid[594]: Secondary Header is updated.
Jan 23 01:08:43.784788 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:08:43.792059 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:08:43.812566 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:08:44.829570 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:08:44.829655 disk-uuid[595]: The operation has completed successfully.
Jan 23 01:08:44.913202 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 01:08:44.913373 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 01:08:44.967599 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 01:08:44.987145 sh[616]: Success
Jan 23 01:08:45.010931 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 01:08:45.011046 kernel: device-mapper: uevent: version 1.0.3
Jan 23 01:08:45.011079 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 01:08:45.024595 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jan 23 01:08:45.107265 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 01:08:45.112653 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 01:08:45.131791 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 01:08:45.154564 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (628)
Jan 23 01:08:45.158328 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4
Jan 23 01:08:45.158413 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:08:45.190032 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 01:08:45.190141 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 01:08:45.190182 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 01:08:45.195046 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 01:08:45.196263 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:08:45.198946 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 01:08:45.201378 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 01:08:45.211420 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 01:08:45.267582 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (663)
Jan 23 01:08:45.271578 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:08:45.271660 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:08:45.279310 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 01:08:45.279394 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 01:08:45.279421 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 01:08:45.286597 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:08:45.288758 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 01:08:45.295808 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 01:08:45.390089 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:08:45.420494 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 01:08:45.524315 systemd-networkd[797]: lo: Link UP
Jan 23 01:08:45.524330 systemd-networkd[797]: lo: Gained carrier
Jan 23 01:08:45.533354 systemd-networkd[797]: Enumeration completed
Jan 23 01:08:45.533919 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 01:08:45.534308 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:08:45.534315 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 01:08:45.536193 systemd-networkd[797]: eth0: Link UP
Jan 23 01:08:45.536509 systemd-networkd[797]: eth0: Gained carrier
Jan 23 01:08:45.536545 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:08:45.547806 systemd[1]: Reached target network.target - Network.
Jan 23 01:08:45.559966 ignition[736]: Ignition 2.22.0
Jan 23 01:08:45.547864 systemd-networkd[797]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba.c.flatcar-212911.internal' to 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba'
Jan 23 01:08:45.559974 ignition[736]: Stage: fetch-offline
Jan 23 01:08:45.548022 systemd-networkd[797]: eth0: DHCPv4 address 10.128.0.88/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 23 01:08:45.560132 ignition[736]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:08:45.564051 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:08:45.560144 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 23 01:08:45.570196 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 01:08:45.560519 ignition[736]: parsed url from cmdline: ""
Jan 23 01:08:45.560545 ignition[736]: no config URL provided
Jan 23 01:08:45.560557 ignition[736]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:08:45.560572 ignition[736]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:08:45.560583 ignition[736]: failed to fetch config: resource requires networking
Jan 23 01:08:45.561253 ignition[736]: Ignition finished successfully
Jan 23 01:08:45.620344 ignition[807]: Ignition 2.22.0
Jan 23 01:08:45.620362 ignition[807]: Stage: fetch
Jan 23 01:08:45.620591 ignition[807]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:08:45.620609 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 23 01:08:45.620755 ignition[807]: parsed url from cmdline: ""
Jan 23 01:08:45.620762 ignition[807]: no config URL provided
Jan 23 01:08:45.620772 ignition[807]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:08:45.620786 ignition[807]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:08:45.635044 unknown[807]: fetched base config from "system"
Jan 23 01:08:45.620842 ignition[807]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 23 01:08:45.635056 unknown[807]: fetched base config from "system"
Jan 23 01:08:45.625228 ignition[807]: GET result: OK
Jan 23 01:08:45.635066 unknown[807]: fetched user config from "gcp"
Jan 23 01:08:45.625487 ignition[807]: parsing config with SHA512: 17e28cab3992759ee10d3acada769920c43a1016fd3488e22c9eb1f1163fe2f6de42d0fca9bb2df11987364a1037f2375a2f5ebad5a2cc1b1c19cee6bf4b7797
Jan 23 01:08:45.638963 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 01:08:45.635491 ignition[807]: fetch: fetch complete
Jan 23 01:08:45.644807 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 01:08:45.635497 ignition[807]: fetch: fetch passed
Jan 23 01:08:45.635582 ignition[807]: Ignition finished successfully
Jan 23 01:08:45.691588 ignition[813]: Ignition 2.22.0
Jan 23 01:08:45.691606 ignition[813]: Stage: kargs
Jan 23 01:08:45.695003 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 01:08:45.691836 ignition[813]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:08:45.697030 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 01:08:45.691855 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 23 01:08:45.692986 ignition[813]: kargs: kargs passed
Jan 23 01:08:45.693045 ignition[813]: Ignition finished successfully
Jan 23 01:08:45.745224 ignition[820]: Ignition 2.22.0
Jan 23 01:08:45.745246 ignition[820]: Stage: disks
Jan 23 01:08:45.745509 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:08:45.750098 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 01:08:45.745560 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 23 01:08:45.758098 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 01:08:45.747649 ignition[820]: disks: disks passed
Jan 23 01:08:45.763723 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 01:08:45.747867 ignition[820]: Ignition finished successfully
Jan 23 01:08:45.771768 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:08:45.780766 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 01:08:45.786755 systemd[1]: Reached target basic.target - Basic System.
Jan 23 01:08:45.788602 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 01:08:45.838476 systemd-fsck[829]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jan 23 01:08:45.851903 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 01:08:45.857875 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 01:08:46.028558 kernel: EXT4-fs (sda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none.
Jan 23 01:08:46.028512 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 01:08:46.032397 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 01:08:46.036912 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 01:08:46.050632 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 01:08:46.058285 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 01:08:46.058378 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 01:08:46.058424 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 01:08:46.077698 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (837)
Jan 23 01:08:46.077741 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:08:46.077767 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:08:46.069789 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 01:08:46.084698 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 01:08:46.084737 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 01:08:46.084762 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 01:08:46.075352 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 01:08:46.085464 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 01:08:46.243428 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 01:08:46.253340 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
Jan 23 01:08:46.261515 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 01:08:46.268973 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 01:08:46.429124 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 01:08:46.432280 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 01:08:46.445961 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 01:08:46.459229 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 01:08:46.462967 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:08:46.507376 ignition[949]: INFO : Ignition 2.22.0
Jan 23 01:08:46.507376 ignition[949]: INFO : Stage: mount
Jan 23 01:08:46.512680 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:08:46.512680 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 23 01:08:46.512680 ignition[949]: INFO : mount: mount passed
Jan 23 01:08:46.512680 ignition[949]: INFO : Ignition finished successfully
Jan 23 01:08:46.512477 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 01:08:46.521483 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 01:08:46.526941 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 01:08:46.554013 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 01:08:46.583750 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (961)
Jan 23 01:08:46.586375 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:08:46.586434 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:08:46.592224 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 01:08:46.592299 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 01:08:46.592325 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 01:08:46.595837 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 01:08:46.634597 ignition[978]: INFO : Ignition 2.22.0
Jan 23 01:08:46.634597 ignition[978]: INFO : Stage: files
Jan 23 01:08:46.639748 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:08:46.639748 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 23 01:08:46.639748 ignition[978]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 01:08:46.639748 ignition[978]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 01:08:46.639748 ignition[978]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 01:08:46.652657 ignition[978]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 01:08:46.652657 ignition[978]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 01:08:46.652657 ignition[978]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 01:08:46.652657 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 23 01:08:46.652657 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 23 01:08:46.645446 unknown[978]: wrote ssh authorized keys file for user: core
Jan 23 01:08:46.887974 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 01:08:46.918772 systemd-networkd[797]: eth0: Gained IPv6LL
Jan 23 01:08:47.994686 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 23 01:08:48.000736 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 01:08:48.000736 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 01:08:48.000736 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:08:48.000736 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:08:48.000736 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:08:48.000736 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:08:48.000736 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:08:48.000736 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:08:48.000736 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:08:48.038730 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:08:48.038730 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 23 01:08:48.038730 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 23 01:08:48.038730 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 23 01:08:48.038730 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 23 01:08:48.430920 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 01:08:49.201085 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 23 01:08:49.201085 ignition[978]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 01:08:49.210223 ignition[978]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:08:49.210223 ignition[978]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:08:49.210223 ignition[978]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 01:08:49.210223 ignition[978]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 01:08:49.210223 ignition[978]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 01:08:49.210223 ignition[978]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:08:49.210223 ignition[978]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:08:49.210223 ignition[978]: INFO : files: files passed
Jan 23 01:08:49.210223 ignition[978]: INFO : Ignition finished successfully
Jan 23 01:08:49.209965 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 01:08:49.215272 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 01:08:49.228821 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 01:08:49.238777 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 01:08:49.238952 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 01:08:49.272104 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:08:49.274779 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:08:49.274779 initrd-setup-root-after-ignition[1007]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:08:49.275445 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 01:08:49.282397 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 01:08:49.290390 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 01:08:49.353410 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 01:08:49.353577 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 01:08:49.359088 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 01:08:49.359398 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 01:08:49.363899 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 01:08:49.365874 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 01:08:49.404967 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:08:49.407953 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 01:08:49.438152 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:08:49.444905 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:08:49.445359 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 01:08:49.450424 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 01:08:49.451032 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:08:49.458131 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 01:08:49.461127 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 01:08:49.465265 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 01:08:49.469089 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 01:08:49.473101 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 01:08:49.477168 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:08:49.481121 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 01:08:49.485120 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:08:49.489252 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 01:08:49.494146 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 01:08:49.498136 systemd[1]: Stopped target swap.target - Swaps. Jan 23 01:08:49.502104 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 01:08:49.502575 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:08:49.512702 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:08:49.513184 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:08:49.519045 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 01:08:49.519335 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:08:49.528118 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 01:08:49.528486 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 01:08:49.547812 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 01:08:49.548579 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:08:49.553238 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 01:08:49.553655 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 01:08:49.565759 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 01:08:49.575829 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 01:08:49.577774 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 01:08:49.578671 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:08:49.593270 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 01:08:49.593978 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:08:49.615048 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 01:08:49.615964 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 01:08:49.625322 ignition[1031]: INFO : Ignition 2.22.0 Jan 23 01:08:49.625322 ignition[1031]: INFO : Stage: umount Jan 23 01:08:49.636708 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:08:49.636708 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 23 01:08:49.636708 ignition[1031]: INFO : umount: umount passed Jan 23 01:08:49.636708 ignition[1031]: INFO : Ignition finished successfully Jan 23 01:08:49.630601 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 01:08:49.631702 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 01:08:49.631882 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 01:08:49.636691 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 01:08:49.636862 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 01:08:49.643006 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 01:08:49.643129 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 01:08:49.643941 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 01:08:49.644019 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 01:08:49.650741 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 01:08:49.650832 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 01:08:49.656907 systemd[1]: Stopped target network.target - Network. 
Jan 23 01:08:49.663726 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 01:08:49.663849 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:08:49.670761 systemd[1]: Stopped target paths.target - Path Units. Jan 23 01:08:49.674664 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 01:08:49.674763 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:08:49.678675 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 01:08:49.682648 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 01:08:49.686780 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 01:08:49.686890 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:08:49.692756 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 01:08:49.692859 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:08:49.698757 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 01:08:49.698891 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 01:08:49.704752 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 01:08:49.704857 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 01:08:49.708086 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 01:08:49.708211 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 01:08:49.712788 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 01:08:49.716144 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 01:08:49.724006 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 01:08:49.725321 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 01:08:49.731199 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 01:08:49.731597 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 01:08:49.731785 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 01:08:49.740393 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 01:08:49.741239 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 01:08:49.743020 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 01:08:49.743122 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:08:49.748617 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 01:08:49.756696 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 01:08:49.756920 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:08:49.759981 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:08:49.760067 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:08:49.766943 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 01:08:49.767034 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 01:08:49.772753 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 01:08:49.772848 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 23 01:08:49.778947 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:08:49.788110 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:08:49.788201 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:08:49.797448 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 01:08:49.797954 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:08:49.805220 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 01:08:49.805299 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 01:08:49.809833 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 01:08:49.809906 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:08:49.812887 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 01:08:49.813074 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:08:49.821870 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 01:08:49.822130 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 01:08:49.828952 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 01:08:49.829040 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:08:49.841632 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 01:08:49.849683 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 01:08:49.849914 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:08:49.856235 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 01:08:49.856336 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:08:49.868998 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 01:08:49.869236 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:08:49.872101 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 01:08:49.872310 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:08:49.876893 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:08:49.876958 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:08:49.884955 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 01:08:49.885024 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 01:08:49.885065 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 01:08:49.885113 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:08:49.885664 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 01:08:49.982724 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 23 01:08:49.885785 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 01:08:49.887344 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 23 01:08:49.887759 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 01:08:49.893891 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 01:08:49.901166 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 01:08:49.935908 systemd[1]: Switching root. Jan 23 01:08:49.998655 systemd-journald[192]: Journal stopped Jan 23 01:08:52.038867 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 01:08:52.038938 kernel: SELinux: policy capability open_perms=1 Jan 23 01:08:52.038968 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 01:08:52.038986 kernel: SELinux: policy capability always_check_network=0 Jan 23 01:08:52.039003 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 01:08:52.039022 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 01:08:52.039045 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 01:08:52.039065 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 01:08:52.039088 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 01:08:52.039109 kernel: audit: type=1403 audit(1769130530.581:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 01:08:52.039133 systemd[1]: Successfully loaded SELinux policy in 75.415ms. Jan 23 01:08:52.039157 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.065ms. Jan 23 01:08:52.039181 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:08:52.039202 systemd[1]: Detected virtualization google. Jan 23 01:08:52.039227 systemd[1]: Detected architecture x86-64. Jan 23 01:08:52.039269 systemd[1]: Detected first boot. Jan 23 01:08:52.039290 systemd[1]: Initializing machine ID from random generator. Jan 23 01:08:52.039312 zram_generator::config[1074]: No configuration found. Jan 23 01:08:52.039335 kernel: Guest personality initialized and is inactive Jan 23 01:08:52.039355 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 01:08:52.039379 kernel: Initialized host personality Jan 23 01:08:52.039406 kernel: NET: Registered PF_VSOCK protocol family Jan 23 01:08:52.039427 systemd[1]: Populated /etc with preset unit settings. Jan 23 01:08:52.039450 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 01:08:52.039471 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 01:08:52.039491 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 01:08:52.039512 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 01:08:52.042505 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 01:08:52.042570 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 01:08:52.042597 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 01:08:52.042622 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 01:08:52.042645 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 01:08:52.042669 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jan 23 01:08:52.042693 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 01:08:52.042722 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 01:08:52.042746 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:08:52.042770 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:08:52.042792 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 01:08:52.042816 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 01:08:52.042841 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 01:08:52.042872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:08:52.042896 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 01:08:52.042921 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:08:52.042948 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:08:52.042971 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 01:08:52.042994 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 01:08:52.043018 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 01:08:52.043042 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 01:08:52.043066 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:08:52.043092 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:08:52.043120 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:08:52.043145 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:08:52.043168 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 01:08:52.043192 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 01:08:52.043216 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 01:08:52.043242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:08:52.043271 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:08:52.043296 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:08:52.043320 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 01:08:52.043345 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 01:08:52.043371 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 01:08:52.043406 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 01:08:52.043432 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:08:52.043462 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 01:08:52.043486 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 01:08:52.043509 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 23 01:08:52.043558 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 01:08:52.043580 systemd[1]: Reached target machines.target - Containers. Jan 23 01:08:52.043600 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 01:08:52.043620 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:08:52.043642 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:08:52.043667 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 01:08:52.043687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:08:52.043708 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:08:52.043728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:08:52.043749 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 01:08:52.043773 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:08:52.043796 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 01:08:52.043819 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 01:08:52.043844 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 01:08:52.043873 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 01:08:52.043898 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 01:08:52.043921 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:08:52.043941 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:08:52.043963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:08:52.043988 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:08:52.044010 kernel: fuse: init (API version 7.41) Jan 23 01:08:52.044030 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 01:08:52.044055 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 01:08:52.044077 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:08:52.044100 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 01:08:52.044120 systemd[1]: Stopped verity-setup.service. Jan 23 01:08:52.044143 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:08:52.044163 kernel: loop: module loaded Jan 23 01:08:52.044183 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 01:08:52.044205 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 01:08:52.044226 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 01:08:52.044254 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 23 01:08:52.044276 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 01:08:52.044298 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 01:08:52.044321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:08:52.044343 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 01:08:52.044366 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 01:08:52.044390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:08:52.044423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:08:52.044449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:08:52.044471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:08:52.044495 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 01:08:52.044518 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 01:08:52.046402 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:08:52.046434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:08:52.046457 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:08:52.047997 systemd-journald[1146]: Collecting audit messages is disabled. Jan 23 01:08:52.048085 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:08:52.048113 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 01:08:52.048138 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:08:52.048161 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 01:08:52.048193 systemd-journald[1146]: Journal started Jan 23 01:08:52.048247 systemd-journald[1146]: Runtime Journal (/run/log/journal/ff2d8e4efa404f79b2e751c22a6d1359) is 8M, max 148.6M, 140.6M free. Jan 23 01:08:51.514286 systemd[1]: Queued start job for default target multi-user.target. Jan 23 01:08:51.532480 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 01:08:51.533091 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 01:08:52.072079 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 01:08:52.072152 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 01:08:52.072193 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:08:52.081844 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 01:08:52.083563 kernel: ACPI: bus type drm_connector registered Jan 23 01:08:52.096262 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 01:08:52.103563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:08:52.107715 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 01:08:52.114925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:08:52.120562 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 23 01:08:52.134614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:08:52.134733 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:08:52.146571 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 01:08:52.163567 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:08:52.174576 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:08:52.184428 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:08:52.185693 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:08:52.190469 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 01:08:52.195722 kernel: loop0: detected capacity change from 0 to 50736 Jan 23 01:08:52.194987 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:08:52.199113 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 01:08:52.218053 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 01:08:52.231909 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 01:08:52.304668 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:08:52.313752 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 01:08:52.318818 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:08:52.326556 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jan 23 01:08:52.326590 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jan 23 01:08:52.346707 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:08:52.353233 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:08:52.360785 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:08:52.366520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:08:52.412905 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:08:52.426145 systemd-journald[1146]: Time spent on flushing to /var/log/journal/ff2d8e4efa404f79b2e751c22a6d1359 is 79.776ms for 975 entries. Jan 23 01:08:52.426145 systemd-journald[1146]: System Journal (/var/log/journal/ff2d8e4efa404f79b2e751c22a6d1359) is 8M, max 584.8M, 576.8M free. Jan 23 01:08:52.530581 kernel: loop1: detected capacity change from 0 to 128560 Jan 23 01:08:52.530630 systemd-journald[1146]: Received client request to flush runtime journal. Jan 23 01:08:52.530692 kernel: loop2: detected capacity change from 0 to 110984 Jan 23 01:08:52.446599 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:08:52.514955 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:08:52.522767 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:08:52.537414 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 01:08:52.542763 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 01:08:52.573330 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. 
Jan 23 01:08:52.573832 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jan 23 01:08:52.582908 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:08:52.587616 kernel: loop3: detected capacity change from 0 to 224512 Jan 23 01:08:52.704594 kernel: loop4: detected capacity change from 0 to 50736 Jan 23 01:08:52.753642 kernel: loop5: detected capacity change from 0 to 128560 Jan 23 01:08:52.814571 kernel: loop6: detected capacity change from 0 to 110984 Jan 23 01:08:52.853582 kernel: loop7: detected capacity change from 0 to 224512 Jan 23 01:08:52.909729 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 23 01:08:52.910778 (sd-merge)[1226]: Merged extensions into '/usr'. Jan 23 01:08:52.929806 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:08:52.929830 systemd[1]: Reloading... Jan 23 01:08:53.103401 zram_generator::config[1248]: No configuration found. Jan 23 01:08:53.402899 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 01:08:53.578230 systemd[1]: Reloading finished in 647 ms. Jan 23 01:08:53.596889 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:08:53.601360 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:08:53.614035 systemd[1]: Starting ensure-sysext.service... Jan 23 01:08:53.619786 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:08:53.668162 systemd[1]: Reload requested from client PID 1292 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:08:53.668191 systemd[1]: Reloading... Jan 23 01:08:53.670763 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:08:53.671510 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 01:08:53.672043 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 01:08:53.672621 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 01:08:53.674345 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:08:53.674932 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Jan 23 01:08:53.675054 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Jan 23 01:08:53.682950 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:08:53.682974 systemd-tmpfiles[1293]: Skipping /boot Jan 23 01:08:53.698498 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:08:53.698540 systemd-tmpfiles[1293]: Skipping /boot Jan 23 01:08:53.745591 zram_generator::config[1317]: No configuration found. Jan 23 01:08:54.010588 systemd[1]: Reloading finished in 341 ms. Jan 23 01:08:54.034120 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 01:08:54.063336 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:08:54.083358 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 23 01:08:54.096942 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:08:54.114687 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:08:54.128996 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:08:54.144205 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:08:54.157944 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 01:08:54.176433 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:08:54.177310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:08:54.184673 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:08:54.197332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:08:54.212345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:08:54.215836 augenrules[1389]: No rules Jan 23 01:08:54.221868 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:08:54.222134 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:08:54.229003 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:08:54.239674 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:08:54.243161 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:08:54.246681 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:08:54.256948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:08:54.258629 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:08:54.272202 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:08:54.284604 systemd-udevd[1378]: Using default interface naming scheme 'v255'. Jan 23 01:08:54.285595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:08:54.285930 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:08:54.303095 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:08:54.303546 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:08:54.334637 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:08:54.335065 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:08:54.340684 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:08:54.355710 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:08:54.370003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 23 01:08:54.378807 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:08:54.379062 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:08:54.384654 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:08:54.393693 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:08:54.403291 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:08:54.414113 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:08:54.427801 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 01:08:54.439623 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:08:54.450994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:08:54.451357 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:08:54.462488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:08:54.462874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:08:54.475377 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:08:54.475761 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:08:54.487509 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 01:08:54.547646 systemd[1]: Finished ensure-sysext.service. Jan 23 01:08:54.549068 systemd-resolved[1372]: Positive Trust Anchors: Jan 23 01:08:54.549082 systemd-resolved[1372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:08:54.549161 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:08:54.567292 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:08:54.569222 systemd-resolved[1372]: Defaulting to hostname 'linux'. Jan 23 01:08:54.572870 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:08:54.580967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:08:54.583847 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:08:54.598947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:08:54.610006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:08:54.626901 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:08:54.637808 systemd[1]: Starting setup-oem.service - Setup OEM... 
Jan 23 01:08:54.645817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:08:54.646208 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:08:54.652547 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:08:54.661734 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:08:54.667421 augenrules[1443]: /sbin/augenrules: No change Jan 23 01:08:54.670736 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 01:08:54.671092 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:08:54.671803 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:08:54.681598 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:08:54.682936 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:08:54.694310 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:08:54.695487 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:08:54.700178 augenrules[1468]: No rules Jan 23 01:08:54.705245 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:08:54.706627 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:08:54.716155 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:08:54.716505 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:08:54.727206 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:08:54.727548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:08:54.757000 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Jan 23 01:08:54.759325 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:08:54.769692 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jan 23 01:08:54.778733 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:08:54.778864 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:08:54.788129 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 01:08:54.797794 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:08:54.807695 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 01:08:54.817948 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 01:08:54.826850 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:08:54.837762 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 23 01:08:54.847731 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:08:54.847791 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:08:54.855707 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:08:54.865357 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:08:54.878552 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:08:54.894558 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:08:54.905959 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:08:54.916713 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 01:08:54.927007 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 01:08:54.936379 systemd-networkd[1460]: lo: Link UP Jan 23 01:08:54.936393 systemd-networkd[1460]: lo: Gained carrier Jan 23 01:08:54.938434 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:08:54.943191 systemd-networkd[1460]: Enumeration completed Jan 23 01:08:54.943617 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 01:08:54.944157 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:08:54.944184 systemd-networkd[1460]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:08:54.945837 systemd-networkd[1460]: eth0: Link UP Jan 23 01:08:54.946809 systemd-networkd[1460]: eth0: Gained carrier Jan 23 01:08:54.947651 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:08:54.950870 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:08:54.958628 systemd-networkd[1460]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba.c.flatcar-212911.internal' to 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' Jan 23 01:08:54.958658 systemd-networkd[1460]: eth0: DHCPv4 address 10.128.0.88/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 23 01:08:54.959915 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:08:54.974369 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 01:08:54.996579 systemd[1]: Reached target network.target - Network. Jan 23 01:08:55.005594 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:08:55.012854 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 23 01:08:55.029704 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 01:08:55.043181 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 01:08:55.071732 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. 
Jan 23 01:08:55.088591 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 23 01:08:55.100573 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 23 01:08:55.125982 kernel: ACPI: button: Power Button [PWRF] Jan 23 01:08:55.125716 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 23 01:08:55.137346 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:08:55.154351 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 23 01:08:55.154443 kernel: ACPI: button: Sleep Button [SLPF] Jan 23 01:08:55.191695 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:08:55.194573 kernel: EDAC MC: Ver: 3.0.0 Jan 23 01:08:55.203735 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:08:55.212727 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:08:55.220865 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:08:55.220924 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:08:55.224075 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:08:55.239350 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 01:08:55.250479 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 01:08:55.266582 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:08:55.282681 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:08:55.293522 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 01:08:55.304065 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 01:08:55.316850 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:08:55.329044 jq[1527]: false Jan 23 01:08:55.329979 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:08:55.344914 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 01:08:55.364481 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 01:08:55.381241 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:08:55.396683 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 01:08:55.412416 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing passwd entry cache Jan 23 01:08:55.412435 oslogin_cache_refresh[1529]: Refreshing passwd entry cache Jan 23 01:08:55.413898 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 01:08:55.430117 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 01:08:55.441771 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 23 01:08:55.443719 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:08:55.450823 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 23 01:08:55.467850 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:08:55.466843 oslogin_cache_refresh[1529]: Failure getting users, quitting Jan 23 01:08:55.471164 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting users, quitting Jan 23 01:08:55.471164 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:08:55.471164 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing group entry cache Jan 23 01:08:55.466876 oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:08:55.466951 oslogin_cache_refresh[1529]: Refreshing group entry cache Jan 23 01:08:55.487463 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting groups, quitting Jan 23 01:08:55.487463 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:08:55.478839 oslogin_cache_refresh[1529]: Failure getting groups, quitting Jan 23 01:08:55.478860 oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:08:55.489938 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:08:55.501218 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:08:55.501607 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:08:55.502154 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 01:08:55.502491 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:08:55.513568 extend-filesystems[1528]: Found /dev/sda6 Jan 23 01:08:55.518942 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:08:55.521159 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:08:55.558025 coreos-metadata[1524]: Jan 23 01:08:55.552 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 23 01:08:55.558025 coreos-metadata[1524]: Jan 23 01:08:55.557 INFO Fetch successful Jan 23 01:08:55.558025 coreos-metadata[1524]: Jan 23 01:08:55.557 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 23 01:08:55.564475 coreos-metadata[1524]: Jan 23 01:08:55.560 INFO Fetch successful Jan 23 01:08:55.564475 coreos-metadata[1524]: Jan 23 01:08:55.560 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 23 01:08:55.570282 update_engine[1544]: I20260123 01:08:55.565496 1544 main.cc:92] Flatcar Update Engine starting Jan 23 01:08:55.570794 extend-filesystems[1528]: Found /dev/sda9 Jan 23 01:08:55.578572 coreos-metadata[1524]: Jan 23 01:08:55.572 INFO Fetch successful Jan 23 01:08:55.578572 coreos-metadata[1524]: Jan 23 01:08:55.572 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 23 01:08:55.578572 coreos-metadata[1524]: Jan 23 01:08:55.574 INFO Fetch successful Jan 23 01:08:55.581135 jq[1545]: true Jan 23 01:08:55.583157 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 01:08:55.599562 extend-filesystems[1528]: Checking size of /dev/sda9 Jan 23 01:08:55.631758 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:08:55.632349 (ntainerd)[1569]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:08:55.651085 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:08:55.651866 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:08:55.665659 jq[1565]: true Jan 23 01:08:55.709701 extend-filesystems[1528]: Resized partition /dev/sda9 Jan 23 01:08:55.724567 extend-filesystems[1583]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 01:08:55.747341 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 01:08:55.770583 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Jan 23 01:08:55.766554 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:08:55.797239 tar[1553]: linux-amd64/LICENSE Jan 23 01:08:55.797239 tar[1553]: linux-amd64/helm Jan 23 01:08:55.998559 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Jan 23 01:08:56.033135 dbus-daemon[1525]: [system] SELinux support is enabled Jan 23 01:08:56.010694 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:08:56.033451 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:08:56.054301 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:08:56.054346 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:08:56.061103 extend-filesystems[1583]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 01:08:56.061103 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 01:08:56.061103 extend-filesystems[1583]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Jan 23 01:08:56.059388 dbus-daemon[1525]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1460 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 01:08:56.067764 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:08:56.139073 extend-filesystems[1528]: Resized filesystem in /dev/sda9 Jan 23 01:08:56.167746 update_engine[1544]: I20260123 01:08:56.066038 1544 update_check_scheduler.cc:74] Next update check in 9m39s Jan 23 01:08:56.167817 bash[1605]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:08:56.135421 dbus-daemon[1525]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 01:08:56.067801 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 01:08:56.080325 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:08:56.081828 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:08:56.103201 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
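extend-filesystems grew the root filesystem online here: resize2fs took /dev/sda9 from 1617920 to 3587067 4k blocks while it was mounted at /. The same check-and-grow can be done by hand, since resize2fs performs an online resize when asked to grow a mounted ext4 filesystem (a sketch, not part of the boot flow):

    # Inspect the partition, grow the filesystem to fill it, verify
    lsblk /dev/sda9
    resize2fs /dev/sda9
    df -h /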
Jan 23 01:08:56.133035 systemd[1]: Starting sshkeys.service... Jan 23 01:08:56.145917 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 01:08:56.164186 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 23 01:08:56.193019 ntpd[1533]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting
Jan 23 01:08:56.193125 ntpd[1533]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 01:08:56.193141 ntpd[1533]: ----------------------------------------------------
Jan 23 01:08:56.193155 ntpd[1533]: ntp-4 is maintained by Network Time Foundation,
Jan 23 01:08:56.193168 ntpd[1533]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 01:08:56.193182 ntpd[1533]: corporation. Support and training for ntp-4 are
Jan 23 01:08:56.193195 ntpd[1533]: available at https://www.nwtime.org/support
Jan 23 01:08:56.193208 ntpd[1533]: ----------------------------------------------------
Jan 23 01:08:56.203267 ntpd[1533]: proto: precision = 0.105 usec (-23)
Jan 23 01:08:56.208815 ntpd[1533]: basedate set to 2026-01-10
Jan 23 01:08:56.208843 ntpd[1533]: gps base set to 2026-01-11 (week 2401)
Jan 23 01:08:56.209021 ntpd[1533]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 01:08:56.209063 ntpd[1533]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 01:08:56.216807 ntpd[1533]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 01:08:56.216868 ntpd[1533]: Listen normally on 3 eth0 10.128.0.88:123
Jan 23 01:08:56.216917 ntpd[1533]: Listen normally on 4 lo [::1]:123
Jan 23 01:08:56.216963 ntpd[1533]: bind(21) AF_INET6 [fe80::4001:aff:fe80:58%2]:123 flags 0x811 failed: Cannot assign requested address
Jan 23 01:08:56.216994 ntpd[1533]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:58%2]:123
Jan 23 01:08:56.250120 kernel: ntpd[1533]: segfault at 24 ip 000055a869bc1aeb sp 00007ffe75e1e280 error 4 in ntpd[68aeb,55a869b5f000+80000] likely on CPU 0 (core 0, socket 0)
Jan 23 01:08:56.250226 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Jan 23 01:08:56.278687 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 01:08:56.304355 systemd-coredump[1619]: Process 1533 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Jan 23 01:08:56.307995 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 23 01:08:56.322596 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 23 01:08:56.349601 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Jan 23 01:08:56.376703 systemd[1]: Started systemd-coredump@0-1619-0.service - Process Core Dump (PID 1619/UID 0).
Jan 23 01:08:56.518714 systemd-networkd[1460]: eth0: Gained IPv6LL
Jan 23 01:08:56.528405 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 01:08:56.542312 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 01:08:56.557682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:08:56.571997 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 01:08:56.586604 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
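ntpd (PID 1533) segfaulted right after binding its sockets, and systemd-coredump picked the crash up for processing; its full dump report appears in the journal further down. Once stored, the dump can be pulled back out with coredumpctl (a sketch; gdb is only needed for the last step):

    # List captured crashes, show the ntpd report, open the core in gdb
    coredumpctl list ntpd
    coredumpctl info 1533
    coredumpctl debug 1533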
Jan 23 01:08:56.616825 init.sh[1630]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 23 01:08:56.616825 init.sh[1630]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 23 01:08:56.616825 init.sh[1630]: + /usr/bin/google_instance_setup Jan 23 01:08:56.655727 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:08:56.709343 coreos-metadata[1620]: Jan 23 01:08:56.708 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 23 01:08:56.712985 coreos-metadata[1620]: Jan 23 01:08:56.711 INFO Fetch failed with 404: resource not found Jan 23 01:08:56.712985 coreos-metadata[1620]: Jan 23 01:08:56.711 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 23 01:08:56.721698 coreos-metadata[1620]: Jan 23 01:08:56.713 INFO Fetch successful Jan 23 01:08:56.721698 coreos-metadata[1620]: Jan 23 01:08:56.720 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 23 01:08:56.721698 coreos-metadata[1620]: Jan 23 01:08:56.720 INFO Fetch failed with 404: resource not found Jan 23 01:08:56.721698 coreos-metadata[1620]: Jan 23 01:08:56.720 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 23 01:08:56.721698 coreos-metadata[1620]: Jan 23 01:08:56.721 INFO Fetch failed with 404: resource not found Jan 23 01:08:56.721698 coreos-metadata[1620]: Jan 23 01:08:56.721 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 23 01:08:56.723333 coreos-metadata[1620]: Jan 23 01:08:56.722 INFO Fetch successful Jan 23 01:08:56.732655 unknown[1620]: wrote ssh authorized keys file for user: core Jan 23 01:08:56.800205 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:08:56.830951 systemd-logind[1542]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 01:08:56.834468 systemd-logind[1542]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 23 01:08:56.837192 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:08:56.841401 systemd-logind[1542]: New seat seat0. Jan 23 01:08:56.851958 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:08:56.865446 update-ssh-keys[1642]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:08:56.865220 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 01:08:56.883317 containerd[1569]: time="2026-01-23T01:08:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:08:56.881627 systemd[1]: Finished sshkeys.service. 
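The coreos-metadata[1620] fetches above walk the GCE fallback order for SSH keys: instance-level sshKeys and ssh-keys, block-project-ssh-keys, then the project-level attributes; the 404s are normal for attributes that were never set. The same endpoints can be queried by hand, keeping in mind that the metadata server rejects any request lacking the Metadata-Flavor header:

    # Manual versions of the fetches logged above
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys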
Jan 23 01:08:56.894584 containerd[1569]: time="2026-01-23T01:08:56.889459056Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:08:56.988398 containerd[1569]: time="2026-01-23T01:08:56.988335876Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.732µs" Jan 23 01:08:56.991559 containerd[1569]: time="2026-01-23T01:08:56.990570849Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:08:56.991559 containerd[1569]: time="2026-01-23T01:08:56.990641216Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:08:56.991559 containerd[1569]: time="2026-01-23T01:08:56.990855034Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:08:56.991559 containerd[1569]: time="2026-01-23T01:08:56.990880462Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:08:56.991559 containerd[1569]: time="2026-01-23T01:08:56.990916836Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:08:56.991559 containerd[1569]: time="2026-01-23T01:08:56.990990375Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:08:56.991559 containerd[1569]: time="2026-01-23T01:08:56.991008912Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:08:56.991559 containerd[1569]: time="2026-01-23T01:08:56.991353649Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:08:56.991559 containerd[1569]: time="2026-01-23T01:08:56.991383344Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:08:56.991559 containerd[1569]: time="2026-01-23T01:08:56.991404569Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:08:56.991559 containerd[1569]: time="2026-01-23T01:08:56.991419202Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:08:56.998606 containerd[1569]: time="2026-01-23T01:08:56.997232070Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:08:56.998606 containerd[1569]: time="2026-01-23T01:08:56.997690440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:08:56.998606 containerd[1569]: time="2026-01-23T01:08:56.997773663Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:08:56.998606 containerd[1569]: time="2026-01-23T01:08:56.997793588Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:08:56.998606 containerd[1569]: 
time="2026-01-23T01:08:56.997929270Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:08:56.998606 containerd[1569]: time="2026-01-23T01:08:56.998305698Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:08:56.998606 containerd[1569]: time="2026-01-23T01:08:56.998410788Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014189803Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014292040Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014318294Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014338735Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014363256Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014384356Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014405507Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014424558Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014443165Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014460213Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014476043Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014497533Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014712402Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:08:57.021569 containerd[1569]: time="2026-01-23T01:08:57.014776718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:08:57.022288 containerd[1569]: time="2026-01-23T01:08:57.014805032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:08:57.022288 containerd[1569]: time="2026-01-23T01:08:57.014834453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:08:57.022288 containerd[1569]: time="2026-01-23T01:08:57.014854537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:08:57.022288 containerd[1569]: 
time="2026-01-23T01:08:57.014872125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:08:57.022288 containerd[1569]: time="2026-01-23T01:08:57.014891431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:08:57.022288 containerd[1569]: time="2026-01-23T01:08:57.014910507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:08:57.022288 containerd[1569]: time="2026-01-23T01:08:57.014935561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:08:57.022288 containerd[1569]: time="2026-01-23T01:08:57.014954107Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:08:57.022288 containerd[1569]: time="2026-01-23T01:08:57.014973014Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:08:57.022288 containerd[1569]: time="2026-01-23T01:08:57.015055109Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:08:57.022288 containerd[1569]: time="2026-01-23T01:08:57.015086842Z" level=info msg="Start snapshots syncer" Jan 23 01:08:57.022288 containerd[1569]: time="2026-01-23T01:08:57.015118367Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:08:57.025222 containerd[1569]: time="2026-01-23T01:08:57.024365395Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:08:57.025222 containerd[1569]: time="2026-01-23T01:08:57.024518411Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:08:57.025659 
containerd[1569]: time="2026-01-23T01:08:57.024650051Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:08:57.025659 containerd[1569]: time="2026-01-23T01:08:57.024972999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:08:57.025659 containerd[1569]: time="2026-01-23T01:08:57.025057589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:08:57.025659 containerd[1569]: time="2026-01-23T01:08:57.025083233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:08:57.025659 containerd[1569]: time="2026-01-23T01:08:57.025150571Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:08:57.025659 containerd[1569]: time="2026-01-23T01:08:57.025175427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:08:57.029727 containerd[1569]: time="2026-01-23T01:08:57.025193764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:08:57.029727 containerd[1569]: time="2026-01-23T01:08:57.025996023Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:08:57.029727 containerd[1569]: time="2026-01-23T01:08:57.026078826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:08:57.029727 containerd[1569]: time="2026-01-23T01:08:57.026589564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:08:57.029727 containerd[1569]: time="2026-01-23T01:08:57.026618130Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031564222Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031656452Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031677486Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031695909Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031728634Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031748781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031781219Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031841261Z" level=info msg="runtime interface created" Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031852325Z" level=info msg="created NRI 
interface" Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031894930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031921296Z" level=info msg="Connect containerd service" Jan 23 01:08:57.033545 containerd[1569]: time="2026-01-23T01:08:57.031981789Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:08:57.042905 containerd[1569]: time="2026-01-23T01:08:57.040058085Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:08:57.144071 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 01:08:57.149181 dbus-daemon[1525]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 01:08:57.155495 dbus-daemon[1525]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1613 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 01:08:57.162092 systemd-coredump[1621]: Process 1533 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1533: #0 0x000055a869bc1aeb n/a (ntpd + 0x68aeb) #1 0x000055a869b6acdf n/a (ntpd + 0x11cdf) #2 0x000055a869b6b575 n/a (ntpd + 0x12575) #3 0x000055a869b66d8a n/a (ntpd + 0xdd8a) #4 0x000055a869b685d3 n/a (ntpd + 0xf5d3) #5 0x000055a869b70fd1 n/a (ntpd + 0x17fd1) #6 0x000055a869b61c2d n/a (ntpd + 0x8c2d) #7 0x00007f355fc0716c n/a (libc.so.6 + 0x2716c) #8 0x00007f355fc07229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055a869b61c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 23 01:08:57.167779 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 01:08:57.176053 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 01:08:57.176273 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 01:08:57.182933 systemd[1]: systemd-coredump@0-1619-0.service: Deactivated successfully. Jan 23 01:08:57.304298 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:08:57.313646 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 01:08:57.333196 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 01:08:57.447434 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:08:57.459506 ntpd[1671]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:08:57.462245 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:08:57.462245 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:08:57.462245 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: ---------------------------------------------------- Jan 23 01:08:57.462245 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:08:57.462245 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:08:57.462245 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: corporation. Support and training for ntp-4 are Jan 23 01:08:57.462245 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: available at https://www.nwtime.org/support Jan 23 01:08:57.462245 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: ---------------------------------------------------- Jan 23 01:08:57.460805 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:08:57.460158 ntpd[1671]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:08:57.466292 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: proto: precision = 0.111 usec (-23) Jan 23 01:08:57.460177 ntpd[1671]: ---------------------------------------------------- Jan 23 01:08:57.460191 ntpd[1671]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:08:57.460950 ntpd[1671]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:08:57.460965 ntpd[1671]: corporation. Support and training for ntp-4 are Jan 23 01:08:57.460980 ntpd[1671]: available at https://www.nwtime.org/support Jan 23 01:08:57.469766 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: basedate set to 2026-01-10 Jan 23 01:08:57.469766 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: gps base set to 2026-01-11 (week 2401) Jan 23 01:08:57.469766 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:08:57.469766 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:08:57.469766 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:08:57.469766 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: Listen normally on 3 eth0 10.128.0.88:123 Jan 23 01:08:57.469766 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: Listen normally on 4 lo [::1]:123 Jan 23 01:08:57.469766 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:58%2]:123 Jan 23 01:08:57.469766 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: Listening on routing socket on fd #22 for interface updates Jan 23 01:08:57.460993 ntpd[1671]: ---------------------------------------------------- Jan 23 01:08:57.464423 ntpd[1671]: proto: precision = 0.111 usec (-23) Jan 23 01:08:57.467792 ntpd[1671]: basedate set to 2026-01-10 Jan 23 01:08:57.467814 ntpd[1671]: gps base set to 2026-01-11 (week 2401) Jan 23 01:08:57.467940 ntpd[1671]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:08:57.467979 ntpd[1671]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:08:57.468225 ntpd[1671]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:08:57.468264 ntpd[1671]: Listen normally on 3 eth0 10.128.0.88:123 Jan 23 01:08:57.468313 ntpd[1671]: Listen normally on 4 lo [::1]:123 Jan 23 01:08:57.468352 ntpd[1671]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:58%2]:123 Jan 23 01:08:57.468388 ntpd[1671]: Listening on routing socket on fd #22 for interface updates Jan 23 01:08:57.478342 ntpd[1671]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:08:57.486563 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:08:57.486563 ntpd[1671]: 23 Jan 01:08:57 ntpd[1671]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:08:57.484064 ntpd[1671]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:08:57.529862 systemd[1]: issuegen.service: Deactivated successfully. 
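The containerd error about /etc/cni/net.d is expected on a node that has not joined a cluster yet: the CRI plugin itself loads fine, but pod networking stays unconfigured until something installs a CNI config. Purely for illustration, a minimal bridge/portmap conflist of the kind that would satisfy the loader looks like this; the file name and subnet are assumptions, and on a real cluster the network add-on writes its own file:

    # Hypothetical example; real clusters install this via their CNI add-on
    mkdir -p /etc/cni/net.d
    cat >/etc/cni/net.d/10-containerd-net.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": "10.88.0.0/16"}]],
            "routes": [{"dst": "0.0.0.0/0"}]
          }
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    EOF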
Jan 23 01:08:57.530741 containerd[1569]: time="2026-01-23T01:08:57.530484963Z" level=info msg="Start subscribing containerd event" Jan 23 01:08:57.530741 containerd[1569]: time="2026-01-23T01:08:57.530620132Z" level=info msg="Start recovering state" Jan 23 01:08:57.530877 containerd[1569]: time="2026-01-23T01:08:57.530809469Z" level=info msg="Start event monitor" Jan 23 01:08:57.530877 containerd[1569]: time="2026-01-23T01:08:57.530852756Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:08:57.530877 containerd[1569]: time="2026-01-23T01:08:57.530872917Z" level=info msg="Start streaming server" Jan 23 01:08:57.531022 containerd[1569]: time="2026-01-23T01:08:57.530897691Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:08:57.531022 containerd[1569]: time="2026-01-23T01:08:57.530932685Z" level=info msg="runtime interface starting up..." Jan 23 01:08:57.531022 containerd[1569]: time="2026-01-23T01:08:57.530950994Z" level=info msg="starting plugins..." Jan 23 01:08:57.531022 containerd[1569]: time="2026-01-23T01:08:57.530973880Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:08:57.533760 containerd[1569]: time="2026-01-23T01:08:57.532710871Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:08:57.533760 containerd[1569]: time="2026-01-23T01:08:57.532802891Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:08:57.533760 containerd[1569]: time="2026-01-23T01:08:57.532886767Z" level=info msg="containerd successfully booted in 0.662617s" Jan 23 01:08:57.532832 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:08:57.542307 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:08:57.556045 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:08:57.603192 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:08:57.616849 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:08:57.629077 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:08:57.639021 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:08:57.679976 polkitd[1655]: Started polkitd version 126 Jan 23 01:08:57.699221 polkitd[1655]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 01:08:57.700050 polkitd[1655]: Loading rules from directory /run/polkit-1/rules.d Jan 23 01:08:57.700130 polkitd[1655]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:08:57.701089 polkitd[1655]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 01:08:57.701146 polkitd[1655]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:08:57.701207 polkitd[1655]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 01:08:57.703087 polkitd[1655]: Finished loading, compiling and executing 2 rules Jan 23 01:08:57.703645 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 23 01:08:57.704768 dbus-daemon[1525]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 01:08:57.705650 polkitd[1655]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 01:08:57.741658 systemd-hostnamed[1613]: Hostname set to (transient) Jan 23 01:08:57.743952 systemd-resolved[1372]: System hostname changed to 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba'. Jan 23 01:08:57.868556 tar[1553]: linux-amd64/README.md Jan 23 01:08:57.891087 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 01:08:58.034179 instance-setup[1633]: INFO Running google_set_multiqueue. Jan 23 01:08:58.057731 instance-setup[1633]: INFO Set channels for eth0 to 2. Jan 23 01:08:58.062856 instance-setup[1633]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 23 01:08:58.065309 instance-setup[1633]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 23 01:08:58.065377 instance-setup[1633]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 23 01:08:58.067025 instance-setup[1633]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 23 01:08:58.067628 instance-setup[1633]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 23 01:08:58.070132 instance-setup[1633]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 23 01:08:58.070189 instance-setup[1633]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 23 01:08:58.072208 instance-setup[1633]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 23 01:08:58.081180 instance-setup[1633]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 23 01:08:58.085789 instance-setup[1633]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 23 01:08:58.088163 instance-setup[1633]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 23 01:08:58.089037 instance-setup[1633]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 23 01:08:58.112875 init.sh[1630]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 23 01:08:58.281557 startup-script[1731]: INFO Starting startup scripts. Jan 23 01:08:58.288441 startup-script[1731]: INFO No startup scripts found in metadata. Jan 23 01:08:58.288810 startup-script[1731]: INFO Finished running startup scripts. Jan 23 01:08:58.311183 init.sh[1630]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 23 01:08:58.311183 init.sh[1630]: + daemon_pids=() Jan 23 01:08:58.311183 init.sh[1630]: + for d in accounts clock_skew network Jan 23 01:08:58.311463 init.sh[1630]: + daemon_pids+=($!) Jan 23 01:08:58.311463 init.sh[1630]: + for d in accounts clock_skew network Jan 23 01:08:58.312041 init.sh[1630]: + daemon_pids+=($!) Jan 23 01:08:58.312041 init.sh[1630]: + for d in accounts clock_skew network Jan 23 01:08:58.312170 init.sh[1735]: + /usr/bin/google_clock_skew_daemon Jan 23 01:08:58.312919 init.sh[1630]: + daemon_pids+=($!) Jan 23 01:08:58.312919 init.sh[1630]: + NOTIFY_SOCKET=/run/systemd/notify Jan 23 01:08:58.312919 init.sh[1630]: + /usr/bin/systemd-notify --ready Jan 23 01:08:58.313096 init.sh[1736]: + /usr/bin/google_network_daemon Jan 23 01:08:58.313999 init.sh[1734]: + /usr/bin/google_accounts_daemon Jan 23 01:08:58.327126 systemd[1]: Started oem-gce.service - GCE Linux Agent. 
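google_set_multiqueue spread the two virtio-net queues across the two vCPUs: IRQs 31/32 pinned to CPU 0, IRQs 33/34 to CPU 1, and the transmit queues steered via XPS (the two "Value too large" write errors appear to be the script probing sysfs nodes this queue count does not have; the affinity setup itself succeeded). The result can be read straight back out of procfs and sysfs:

    # Read back the affinity and XPS masks set above
    cat /proc/irq/31/smp_affinity_list              # 0
    cat /proc/irq/33/smp_affinity_list              # 1
    cat /sys/class/net/eth0/queues/tx-0/xps_cpus    # 1  (mask: CPU 0)
    cat /sys/class/net/eth0/queues/tx-1/xps_cpus    # 2  (mask: CPU 1)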
Jan 23 01:08:58.339705 init.sh[1630]: + wait -n 1734 1735 1736 Jan 23 01:08:58.635782 google-clock-skew[1735]: INFO Starting Google Clock Skew daemon. Jan 23 01:08:58.650407 google-clock-skew[1735]: INFO Clock drift token has changed: 0. Jan 23 01:08:58.683497 google-networking[1736]: INFO Starting Google Networking daemon. Jan 23 01:08:58.784019 groupadd[1746]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 23 01:08:58.787652 groupadd[1746]: group added to /etc/gshadow: name=google-sudoers Jan 23 01:08:58.857755 groupadd[1746]: new group: name=google-sudoers, GID=1000 Jan 23 01:08:58.892912 google-accounts[1734]: INFO Starting Google Accounts daemon. Jan 23 01:08:58.909006 google-accounts[1734]: WARNING OS Login not installed. Jan 23 01:08:58.911797 google-accounts[1734]: INFO Creating a new user account for 0. Jan 23 01:08:58.918419 init.sh[1754]: useradd: invalid user name '0': use --badname to ignore Jan 23 01:08:58.918860 google-accounts[1734]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 23 01:08:59.034784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:08:59.045563 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:08:59.055304 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:08:59.055611 systemd[1]: Startup finished in 4.178s (kernel) + 8.767s (initrd) + 8.545s (userspace) = 21.492s. Jan 23 01:08:59.000084 systemd-resolved[1372]: Clock change detected. Flushing caches. Jan 23 01:08:59.016721 systemd-journald[1146]: Time jumped backwards, rotating. Jan 23 01:08:59.000620 google-clock-skew[1735]: INFO Synced system time with hardware clock. Jan 23 01:08:59.181788 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:08:59.184253 systemd[1]: Started sshd@0-10.128.0.88:22-4.153.228.146:48344.service - OpenSSH per-connection server daemon (4.153.228.146:48344). Jan 23 01:08:59.469235 sshd[1772]: Accepted publickey for core from 4.153.228.146 port 48344 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:08:59.472265 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:08:59.484376 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:08:59.486195 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:08:59.513348 systemd-logind[1542]: New session 1 of user core. Jan 23 01:08:59.523453 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:08:59.530031 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:08:59.560821 (systemd)[1777]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:08:59.566660 systemd-logind[1542]: New session c1 of user core. Jan 23 01:08:59.820541 systemd[1777]: Queued start job for default target default.target. Jan 23 01:08:59.828038 systemd[1777]: Created slice app.slice - User Application Slice. Jan 23 01:08:59.828091 systemd[1777]: Reached target paths.target - Paths. Jan 23 01:08:59.828165 systemd[1777]: Reached target timers.target - Timers. Jan 23 01:08:59.830754 systemd[1777]: Starting dbus.socket - D-Bus User Message Bus Socket... 
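google-accounts tried to create a local user literally named '0', which shadow's useradd rejects as an invalid name (exit status 3); the likely culprit is a malformed ssh-keys metadata entry, since that attribute expects 'username:ssh-rsa AAAA... comment' pairs. The error message itself names the escape hatch, though fixing the metadata entry is the better cure:

    # What the agent ran, and the override useradd suggests
    useradd -m -s /bin/bash -p '*' 0            # fails: invalid user name '0'
    useradd --badname -m -s /bin/bash -p '*' 0  # would force it (not recommended)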
Jan 23 01:08:59.856555 systemd[1777]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:08:59.857151 systemd[1777]: Reached target sockets.target - Sockets. Jan 23 01:08:59.857341 systemd[1777]: Reached target basic.target - Basic System. Jan 23 01:08:59.857460 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:08:59.857996 systemd[1777]: Reached target default.target - Main User Target. Jan 23 01:08:59.858059 systemd[1777]: Startup finished in 278ms. Jan 23 01:08:59.865168 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:08:59.957151 kubelet[1761]: E0123 01:08:59.957064 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:08:59.960267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:08:59.960546 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:08:59.961223 systemd[1]: kubelet.service: Consumed 1.343s CPU time, 266M memory peak. Jan 23 01:09:00.045568 systemd[1]: Started sshd@1-10.128.0.88:22-4.153.228.146:48354.service - OpenSSH per-connection server daemon (4.153.228.146:48354). Jan 23 01:09:00.278797 sshd[1790]: Accepted publickey for core from 4.153.228.146 port 48354 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:09:00.279524 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:00.288079 systemd-logind[1542]: New session 2 of user core. Jan 23 01:09:00.293238 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:09:00.444547 sshd[1793]: Connection closed by 4.153.228.146 port 48354 Jan 23 01:09:00.446199 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:00.452262 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:09:00.452649 systemd[1]: sshd@1-10.128.0.88:22-4.153.228.146:48354.service: Deactivated successfully. Jan 23 01:09:00.455285 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:09:00.457791 systemd-logind[1542]: Removed session 2. Jan 23 01:09:00.488240 systemd[1]: Started sshd@2-10.128.0.88:22-4.153.228.146:48370.service - OpenSSH per-connection server daemon (4.153.228.146:48370). Jan 23 01:09:00.737602 sshd[1799]: Accepted publickey for core from 4.153.228.146 port 48370 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:09:00.739429 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:00.747009 systemd-logind[1542]: New session 3 of user core. Jan 23 01:09:00.756224 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:09:00.900423 sshd[1802]: Connection closed by 4.153.228.146 port 48370 Jan 23 01:09:00.901301 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:00.907722 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:09:00.908399 systemd[1]: sshd@2-10.128.0.88:22-4.153.228.146:48370.service: Deactivated successfully. Jan 23 01:09:00.911159 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:09:00.913543 systemd-logind[1542]: Removed session 3. 
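The kubelet exit above is the standard pre-kubeadm failure mode: /var/lib/kubelet/config.yaml does not exist until 'kubeadm init' or 'kubeadm join' writes it, so the unit will keep crash-looping on its restart timer until then. Purely as an illustration of what belongs in that file, a hand-written minimal stand-in could look like this (the values are assumptions, not what kubeadm will generate for this node):

    # Illustrative only; kubeadm normally writes this file itself
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF
    systemctl restart kubelet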
Jan 23 01:09:00.945695 systemd[1]: Started sshd@3-10.128.0.88:22-4.153.228.146:48382.service - OpenSSH per-connection server daemon (4.153.228.146:48382). Jan 23 01:09:01.218048 sshd[1808]: Accepted publickey for core from 4.153.228.146 port 48382 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:09:01.219976 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:01.226984 systemd-logind[1542]: New session 4 of user core. Jan 23 01:09:01.234209 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:09:01.402848 sshd[1811]: Connection closed by 4.153.228.146 port 48382 Jan 23 01:09:01.404289 sshd-session[1808]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:01.410617 systemd[1]: sshd@3-10.128.0.88:22-4.153.228.146:48382.service: Deactivated successfully. Jan 23 01:09:01.413432 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:09:01.415027 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:09:01.417164 systemd-logind[1542]: Removed session 4. Jan 23 01:09:01.451438 systemd[1]: Started sshd@4-10.128.0.88:22-4.153.228.146:48390.service - OpenSSH per-connection server daemon (4.153.228.146:48390). Jan 23 01:09:01.687272 sshd[1817]: Accepted publickey for core from 4.153.228.146 port 48390 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:09:01.689635 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:01.697673 systemd-logind[1542]: New session 5 of user core. Jan 23 01:09:01.705215 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:09:01.851337 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:09:01.852013 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:09:01.869734 sudo[1821]: pam_unix(sudo:session): session closed for user root Jan 23 01:09:01.902948 sshd[1820]: Connection closed by 4.153.228.146 port 48390 Jan 23 01:09:01.902050 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:01.908795 systemd[1]: sshd@4-10.128.0.88:22-4.153.228.146:48390.service: Deactivated successfully. Jan 23 01:09:01.911686 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:09:01.914155 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:09:01.917247 systemd-logind[1542]: Removed session 5. Jan 23 01:09:01.949871 systemd[1]: Started sshd@5-10.128.0.88:22-4.153.228.146:48394.service - OpenSSH per-connection server daemon (4.153.228.146:48394). Jan 23 01:09:02.217886 sshd[1827]: Accepted publickey for core from 4.153.228.146 port 48394 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:09:02.219602 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:02.226990 systemd-logind[1542]: New session 6 of user core. Jan 23 01:09:02.236194 systemd[1]: Started session-6.scope - Session 6 of User core. 
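The first sudo of session 5 flips SELinux to enforcing at runtime; setenforce changes only the live state, not what applies at the next boot. The current mode is a one-word query:

    # Check and toggle the live SELinux mode (non-persistent)
    getenforce        # Enforcing / Permissive / Disabled
    setenforce 1      # enforce immediately
    setenforce 0      # back to permissive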
Jan 23 01:09:02.375479 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:09:02.375988 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:09:02.383249 sudo[1832]: pam_unix(sudo:session): session closed for user root Jan 23 01:09:02.397668 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:09:02.398168 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:09:02.411151 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:09:02.464291 augenrules[1854]: No rules Jan 23 01:09:02.466044 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:09:02.466369 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:09:02.469470 sudo[1831]: pam_unix(sudo:session): session closed for user root Jan 23 01:09:02.505307 sshd[1830]: Connection closed by 4.153.228.146 port 48394 Jan 23 01:09:02.506120 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:02.512259 systemd[1]: sshd@5-10.128.0.88:22-4.153.228.146:48394.service: Deactivated successfully. Jan 23 01:09:02.514689 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:09:02.516740 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:09:02.518380 systemd-logind[1542]: Removed session 6. Jan 23 01:09:02.548696 systemd[1]: Started sshd@6-10.128.0.88:22-4.153.228.146:48404.service - OpenSSH per-connection server daemon (4.153.228.146:48404). Jan 23 01:09:02.796481 sshd[1863]: Accepted publickey for core from 4.153.228.146 port 48404 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:09:02.798312 sshd-session[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:02.805655 systemd-logind[1542]: New session 7 of user core. Jan 23 01:09:02.817233 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:09:02.944423 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:09:02.944945 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:09:03.442123 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 01:09:03.458622 (dockerd)[1884]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 01:09:03.832360 dockerd[1884]: time="2026-01-23T01:09:03.832185846Z" level=info msg="Starting up" Jan 23 01:09:03.833382 dockerd[1884]: time="2026-01-23T01:09:03.833343487Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 01:09:03.851457 dockerd[1884]: time="2026-01-23T01:09:03.851389862Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 01:09:03.883499 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1545494861-merged.mount: Deactivated successfully. Jan 23 01:09:04.043756 dockerd[1884]: time="2026-01-23T01:09:04.043439163Z" level=info msg="Loading containers: start." 
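dockerd started its own embedded containerd client (the docker-containerd.sock line above) and settled on the overlay2 storage driver; the "Not using native diff" warning that follows is a known performance note, not an error. Both choices can be confirmed from the CLI:

    # Confirm the storage driver and server version the daemon reports
    docker info --format '{{.Driver}}'              # overlay2
    docker version --format '{{.Server.Version}}'   # 28.0.4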
Jan 23 01:09:04.061963 kernel: Initializing XFRM netlink socket Jan 23 01:09:04.416706 systemd-networkd[1460]: docker0: Link UP Jan 23 01:09:04.423317 dockerd[1884]: time="2026-01-23T01:09:04.423242420Z" level=info msg="Loading containers: done." Jan 23 01:09:04.443940 dockerd[1884]: time="2026-01-23T01:09:04.443852379Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 01:09:04.444135 dockerd[1884]: time="2026-01-23T01:09:04.443993575Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 01:09:04.444135 dockerd[1884]: time="2026-01-23T01:09:04.444129373Z" level=info msg="Initializing buildkit" Jan 23 01:09:04.477251 dockerd[1884]: time="2026-01-23T01:09:04.477180380Z" level=info msg="Completed buildkit initialization" Jan 23 01:09:04.489336 dockerd[1884]: time="2026-01-23T01:09:04.489251290Z" level=info msg="Daemon has completed initialization" Jan 23 01:09:04.490038 dockerd[1884]: time="2026-01-23T01:09:04.489565924Z" level=info msg="API listen on /run/docker.sock" Jan 23 01:09:04.489647 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 01:09:05.507330 containerd[1569]: time="2026-01-23T01:09:05.507273965Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 01:09:05.999811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount658699347.mount: Deactivated successfully. Jan 23 01:09:07.642877 containerd[1569]: time="2026-01-23T01:09:07.642800374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:07.644299 containerd[1569]: time="2026-01-23T01:09:07.644235777Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29078734" Jan 23 01:09:07.645794 containerd[1569]: time="2026-01-23T01:09:07.645722271Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:07.649166 containerd[1569]: time="2026-01-23T01:09:07.649103306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:07.650774 containerd[1569]: time="2026-01-23T01:09:07.650439309Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.143109111s" Jan 23 01:09:07.650774 containerd[1569]: time="2026-01-23T01:09:07.650492337Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 23 01:09:07.651278 containerd[1569]: time="2026-01-23T01:09:07.651244914Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 01:09:09.135724 containerd[1569]: time="2026-01-23T01:09:09.135648194Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:09.137336 containerd[1569]: time="2026-01-23T01:09:09.137136854Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24995412" Jan 23 01:09:09.138593 containerd[1569]: time="2026-01-23T01:09:09.138544693Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:09.142378 containerd[1569]: time="2026-01-23T01:09:09.142334497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:09.143953 containerd[1569]: time="2026-01-23T01:09:09.143770737Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.49248103s" Jan 23 01:09:09.143953 containerd[1569]: time="2026-01-23T01:09:09.143819475Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 23 01:09:09.144668 containerd[1569]: time="2026-01-23T01:09:09.144614463Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 01:09:10.211515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 01:09:10.217187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:09:10.552499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 01:09:10.565597 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:09:10.572298 containerd[1569]: time="2026-01-23T01:09:10.572210156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:10.573929 containerd[1569]: time="2026-01-23T01:09:10.573699791Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19407116"
Jan 23 01:09:10.575980 containerd[1569]: time="2026-01-23T01:09:10.575923345Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:10.583521 containerd[1569]: time="2026-01-23T01:09:10.583464993Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.438807388s"
Jan 23 01:09:10.583784 containerd[1569]: time="2026-01-23T01:09:10.583694780Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\""
Jan 23 01:09:10.584055 containerd[1569]: time="2026-01-23T01:09:10.583770029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:10.585463 containerd[1569]: time="2026-01-23T01:09:10.584737169Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 23 01:09:10.638585 kubelet[2169]: E0123 01:09:10.638534 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:09:10.643690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:09:10.644043 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:09:10.645627 systemd[1]: kubelet.service: Consumed 254ms CPU time, 110.7M memory peak.
Jan 23 01:09:11.867008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385215672.mount: Deactivated successfully.
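The kubelet exit above (status=1/FAILURE) is a startup precondition failure rather than a crash: /var/lib/kubelet/config.yaml does not exist yet, so run.go aborts before the node agent does any work, and systemd keeps rescheduling the unit, which is why the restart counter climbs. A hypothetical pre-flight check equivalent to what fails here, assuming nothing beyond what the error message states:

    # Reproduces the failing check from the run.go error above: the kubelet
    # refuses to start until its config file exists at the logged path.
    from pathlib import Path

    config = Path("/var/lib/kubelet/config.yaml")
    if not config.is_file():
        raise SystemExit(f"open {config}: no such file or directory")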
Jan 23 01:09:12.538688 containerd[1569]: time="2026-01-23T01:09:12.538612813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:12.540164 containerd[1569]: time="2026-01-23T01:09:12.539931151Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31163922"
Jan 23 01:09:12.541456 containerd[1569]: time="2026-01-23T01:09:12.541403076Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:12.544502 containerd[1569]: time="2026-01-23T01:09:12.544464761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:12.545539 containerd[1569]: time="2026-01-23T01:09:12.545350268Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.960567737s"
Jan 23 01:09:12.545539 containerd[1569]: time="2026-01-23T01:09:12.545399925Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\""
Jan 23 01:09:12.546494 containerd[1569]: time="2026-01-23T01:09:12.546451841Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 23 01:09:12.924558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645122980.mount: Deactivated successfully.
Jan 23 01:09:14.254816 containerd[1569]: time="2026-01-23T01:09:14.254750151Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18572327"
Jan 23 01:09:14.255520 containerd[1569]: time="2026-01-23T01:09:14.255481486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:14.257534 containerd[1569]: time="2026-01-23T01:09:14.257492999Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:14.260752 containerd[1569]: time="2026-01-23T01:09:14.260704126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:14.262158 containerd[1569]: time="2026-01-23T01:09:14.262116300Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.715620372s"
Jan 23 01:09:14.262311 containerd[1569]: time="2026-01-23T01:09:14.262287366Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jan 23 01:09:14.263169 containerd[1569]: time="2026-01-23T01:09:14.263118897Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 23 01:09:14.607326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1439022133.mount: Deactivated successfully.
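The tmpmount unit names above use systemd's unit-name escaping: '/' in the path becomes '-', and a literal '-' is escaped as '\x2d'. A small sketch that undoes only the escapes seen in this log (the full escaping rules cover more cases, e.g. arbitrary \xNN bytes):

    # Map a systemd .mount unit name back to the path it guards.
    # Handles only the escapes visible above: "-" as separator, "\x2d" for "-".
    def mount_unit_to_path(unit: str) -> str:
        body = unit.removesuffix(".mount")
        return "/" + body.replace("-", "/").replace(r"\x2d", "-")

    print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount1439022133.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount1439022133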
Jan 23 01:09:14.612962 containerd[1569]: time="2026-01-23T01:09:14.612864119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 01:09:14.614200 containerd[1569]: time="2026-01-23T01:09:14.613873735Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322136"
Jan 23 01:09:14.615548 containerd[1569]: time="2026-01-23T01:09:14.615499626Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 01:09:14.618970 containerd[1569]: time="2026-01-23T01:09:14.618928126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 01:09:14.620159 containerd[1569]: time="2026-01-23T01:09:14.619798387Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 356.641639ms"
Jan 23 01:09:14.620159 containerd[1569]: time="2026-01-23T01:09:14.619846291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 23 01:09:14.620851 containerd[1569]: time="2026-01-23T01:09:14.620810035Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 23 01:09:15.020096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946446479.mount: Deactivated successfully.
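Note how containerd reports pull durations in Go's duration notation, switching units with magnitude: "356.641639ms" for the tiny pause image versus whole seconds for the larger images. A sketch of a parser covering just the two suffixes that appear in this log:

    import math

    # Parse the Go-style durations containerd logs ("...ms" or "...s" only).
    def parse_go_duration(s: str) -> float:
        if s.endswith("ms"):
            return float(s[:-2]) / 1000.0
        if s.endswith("s"):
            return float(s[:-1])
        raise ValueError(f"unsupported duration: {s}")

    assert math.isclose(parse_go_duration("356.641639ms"), 0.356641639)
    assert math.isclose(parse_go_duration("1.960567737s"), 1.960567737)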
Jan 23 01:09:17.541260 containerd[1569]: time="2026-01-23T01:09:17.541179188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:17.542780 containerd[1569]: time="2026-01-23T01:09:17.542722223Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57690069"
Jan 23 01:09:17.544392 containerd[1569]: time="2026-01-23T01:09:17.544321024Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:17.548014 containerd[1569]: time="2026-01-23T01:09:17.547928227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:09:17.549696 containerd[1569]: time="2026-01-23T01:09:17.549378161Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.928371734s"
Jan 23 01:09:17.549696 containerd[1569]: time="2026-01-23T01:09:17.549428543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 23 01:09:20.894834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 01:09:20.899423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:09:21.173988 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 01:09:21.174130 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 01:09:21.174561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:09:21.175062 systemd[1]: kubelet.service: Consumed 126ms CPU time, 78M memory peak.
Jan 23 01:09:21.185743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:09:21.225902 systemd[1]: Reload requested from client PID 2327 ('systemctl') (unit session-7.scope)...
Jan 23 01:09:21.226159 systemd[1]: Reloading...
Jan 23 01:09:21.420959 zram_generator::config[2372]: No configuration found.
Jan 23 01:09:21.744273 systemd[1]: Reloading finished in 517 ms.
Jan 23 01:09:21.836584 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 01:09:21.836685 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 01:09:21.837153 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:09:21.837222 systemd[1]: kubelet.service: Consumed 173ms CPU time, 98.3M memory peak.
Jan 23 01:09:21.840334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:09:22.475250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:09:22.491620 (kubelet)[2423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 01:09:22.552070 kubelet[2423]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 01:09:22.552070 kubelet[2423]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 01:09:22.552070 kubelet[2423]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 01:09:22.552818 kubelet[2423]: I0123 01:09:22.552171 2423 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 01:09:22.990702 kubelet[2423]: I0123 01:09:22.990633 2423 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 01:09:22.990702 kubelet[2423]: I0123 01:09:22.990679 2423 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 01:09:22.992135 kubelet[2423]: I0123 01:09:22.992087 2423 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 01:09:23.051126 kubelet[2423]: E0123 01:09:23.051073 2423 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError"
Jan 23 01:09:23.053531 kubelet[2423]: I0123 01:09:23.053233 2423 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 01:09:23.066846 kubelet[2423]: I0123 01:09:23.066739 2423 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 01:09:23.072810 kubelet[2423]: I0123 01:09:23.071778 2423 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 01:09:23.075722 kubelet[2423]: I0123 01:09:23.074969 2423 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 01:09:23.075722 kubelet[2423]: I0123 01:09:23.075059 2423 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 01:09:23.075722 kubelet[2423]: I0123 01:09:23.075392 2423 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 01:09:23.075722 kubelet[2423]: I0123 01:09:23.075412 2423 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 01:09:23.076115 kubelet[2423]: I0123 01:09:23.075631 2423 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 01:09:23.082142 kubelet[2423]: I0123 01:09:23.082101 2423 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 01:09:23.082142 kubelet[2423]: I0123 01:09:23.082162 2423 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 01:09:23.082381 kubelet[2423]: I0123 01:09:23.082207 2423 kubelet.go:352] "Adding apiserver pod source"
Jan 23 01:09:23.082381 kubelet[2423]: I0123 01:09:23.082224 2423 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 01:09:23.092944 kubelet[2423]: W0123 01:09:23.092614 2423 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.88:6443: connect: connection refused
Jan 23 01:09:23.092944 kubelet[2423]: E0123 01:09:23.092716 2423 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError"
Jan 23 01:09:23.092944 kubelet[2423]: W0123 01:09:23.092828 2423 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba&limit=500&resourceVersion=0": dial tcp 10.128.0.88:6443: connect: connection refused
Jan 23 01:09:23.092944 kubelet[2423]: E0123 01:09:23.092879 2423 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba&limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError"
Jan 23 01:09:23.094708 kubelet[2423]: I0123 01:09:23.094461 2423 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 01:09:23.095162 kubelet[2423]: I0123 01:09:23.095126 2423 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 01:09:23.096960 kubelet[2423]: W0123 01:09:23.096557 2423 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 01:09:23.101118 kubelet[2423]: I0123 01:09:23.101091 2423 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 01:09:23.101296 kubelet[2423]: I0123 01:09:23.101283 2423 server.go:1287] "Started kubelet"
Jan 23 01:09:23.106236 kubelet[2423]: I0123 01:09:23.106175 2423 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 01:09:23.110694 kubelet[2423]: I0123 01:09:23.110072 2423 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 01:09:23.112372 kubelet[2423]: I0123 01:09:23.112282 2423 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 01:09:23.112774 kubelet[2423]: I0123 01:09:23.112727 2423 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 01:09:23.114945 kubelet[2423]: I0123 01:09:23.114189 2423 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 01:09:23.119091 kubelet[2423]: I0123 01:09:23.118750 2423 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 01:09:23.122963 kubelet[2423]: I0123 01:09:23.122698 2423 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 01:09:23.123064 kubelet[2423]: E0123 01:09:23.123035 2423 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found"
Jan 23 01:09:23.123155 kubelet[2423]: I0123 01:09:23.123131 2423 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 01:09:23.123203 kubelet[2423]: I0123 01:09:23.123194 2423 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 01:09:23.123947 kubelet[2423]: W0123 01:09:23.123837 2423 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.88:6443: connect: connection refused
Jan 23 01:09:23.124054 kubelet[2423]: E0123 01:09:23.123961 2423 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError"
Jan 23 01:09:23.124114 kubelet[2423]: E0123 01:09:23.124076 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba?timeout=10s\": dial tcp 10.128.0.88:6443: connect: connection refused" interval="200ms"
Jan 23 01:09:23.124379 kubelet[2423]: I0123 01:09:23.124337 2423 factory.go:221] Registration of the systemd container factory successfully
Jan 23 01:09:23.124455 kubelet[2423]: I0123 01:09:23.124442 2423 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 01:09:23.133139 kubelet[2423]: E0123 01:09:23.130734 2423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba.188d36dfb6efd4ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba,UID:ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba,},FirstTimestamp:2026-01-23 01:09:23.101250798 +0000 UTC m=+0.604025597,LastTimestamp:2026-01-23 01:09:23.101250798 +0000 UTC m=+0.604025597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba,}"
Jan 23 01:09:23.134972 kubelet[2423]: I0123 01:09:23.134395 2423 factory.go:221] Registration of the containerd container factory successfully
Jan 23 01:09:23.150741 kubelet[2423]: I0123 01:09:23.150679 2423 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 01:09:23.153084 kubelet[2423]: I0123 01:09:23.153049 2423 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 01:09:23.153255 kubelet[2423]: I0123 01:09:23.153240 2423 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 01:09:23.153426 kubelet[2423]: I0123 01:09:23.153408 2423 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 01:09:23.153872 kubelet[2423]: I0123 01:09:23.153502 2423 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 23 01:09:23.153872 kubelet[2423]: E0123 01:09:23.153597 2423 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 01:09:23.163224 kubelet[2423]: W0123 01:09:23.163153 2423 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.88:6443: connect: connection refused
Jan 23 01:09:23.163343 kubelet[2423]: E0123 01:09:23.163227 2423 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError"
Jan 23 01:09:23.167689 kubelet[2423]: E0123 01:09:23.167654 2423 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 01:09:23.179345 kubelet[2423]: I0123 01:09:23.179306 2423 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 01:09:23.179483 kubelet[2423]: I0123 01:09:23.179396 2423 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 01:09:23.179483 kubelet[2423]: I0123 01:09:23.179424 2423 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 01:09:23.182685 kubelet[2423]: I0123 01:09:23.182617 2423 policy_none.go:49] "None policy: Start"
Jan 23 01:09:23.182685 kubelet[2423]: I0123 01:09:23.182653 2423 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 01:09:23.182685 kubelet[2423]: I0123 01:09:23.182673 2423 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 01:09:23.190896 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 01:09:23.205324 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 01:09:23.210514 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 01:09:23.222348 kubelet[2423]: I0123 01:09:23.222307 2423 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 01:09:23.222727 kubelet[2423]: I0123 01:09:23.222620 2423 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 01:09:23.222727 kubelet[2423]: I0123 01:09:23.222645 2423 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 01:09:23.224158 kubelet[2423]: I0123 01:09:23.224103 2423 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 01:09:23.225618 kubelet[2423]: E0123 01:09:23.225513 2423 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 01:09:23.225618 kubelet[2423]: E0123 01:09:23.225574 2423 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found"
Jan 23 01:09:23.276229 systemd[1]: Created slice kubepods-burstable-podd8867f1234da07b0bb06a3897d0b459f.slice - libcontainer container kubepods-burstable-podd8867f1234da07b0bb06a3897d0b459f.slice.
Jan 23 01:09:23.296065 kubelet[2423]: E0123 01:09:23.295978 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.300359 systemd[1]: Created slice kubepods-burstable-pod8594ad12744bceaeb64f85a375e5a7ad.slice - libcontainer container kubepods-burstable-pod8594ad12744bceaeb64f85a375e5a7ad.slice.
Jan 23 01:09:23.303868 kubelet[2423]: E0123 01:09:23.303827 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.309757 systemd[1]: Created slice kubepods-burstable-pod483a400539e522dde0c7b9d971c20226.slice - libcontainer container kubepods-burstable-pod483a400539e522dde0c7b9d971c20226.slice.
Jan 23 01:09:23.312560 kubelet[2423]: E0123 01:09:23.312524 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.325498 kubelet[2423]: E0123 01:09:23.325420 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba?timeout=10s\": dial tcp 10.128.0.88:6443: connect: connection refused" interval="400ms"
Jan 23 01:09:23.330629 kubelet[2423]: I0123 01:09:23.330580 2423 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.331139 kubelet[2423]: E0123 01:09:23.331100 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.88:6443/api/v1/nodes\": dial tcp 10.128.0.88:6443: connect: connection refused" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.424783 kubelet[2423]: I0123 01:09:23.424705 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8867f1234da07b0bb06a3897d0b459f-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"d8867f1234da07b0bb06a3897d0b459f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.424783 kubelet[2423]: I0123 01:09:23.424778 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8594ad12744bceaeb64f85a375e5a7ad-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"8594ad12744bceaeb64f85a375e5a7ad\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.425054 kubelet[2423]: I0123 01:09:23.424806 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8594ad12744bceaeb64f85a375e5a7ad-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"8594ad12744bceaeb64f85a375e5a7ad\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.425054 kubelet[2423]: I0123 01:09:23.424833 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8594ad12744bceaeb64f85a375e5a7ad-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"8594ad12744bceaeb64f85a375e5a7ad\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.425054 kubelet[2423]: I0123 01:09:23.424858 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8594ad12744bceaeb64f85a375e5a7ad-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"8594ad12744bceaeb64f85a375e5a7ad\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.425054 kubelet[2423]: I0123 01:09:23.424884 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8594ad12744bceaeb64f85a375e5a7ad-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"8594ad12744bceaeb64f85a375e5a7ad\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.425247 kubelet[2423]: I0123 01:09:23.424937 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/483a400539e522dde0c7b9d971c20226-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"483a400539e522dde0c7b9d971c20226\") " pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.425247 kubelet[2423]: I0123 01:09:23.424966 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8867f1234da07b0bb06a3897d0b459f-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"d8867f1234da07b0bb06a3897d0b459f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.425247 kubelet[2423]: I0123 01:09:23.424992 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8867f1234da07b0bb06a3897d0b459f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"d8867f1234da07b0bb06a3897d0b459f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.537203 kubelet[2423]: I0123 01:09:23.536890 2423 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.538292 kubelet[2423]: E0123 01:09:23.538247 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.88:6443/api/v1/nodes\": dial tcp 10.128.0.88:6443: connect: connection refused" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.598296 containerd[1569]: time="2026-01-23T01:09:23.598194159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba,Uid:d8867f1234da07b0bb06a3897d0b459f,Namespace:kube-system,Attempt:0,}"
Jan 23 01:09:23.605577 containerd[1569]: time="2026-01-23T01:09:23.605518249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba,Uid:8594ad12744bceaeb64f85a375e5a7ad,Namespace:kube-system,Attempt:0,}"
Jan 23 01:09:23.613772 containerd[1569]: time="2026-01-23T01:09:23.613639019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba,Uid:483a400539e522dde0c7b9d971c20226,Namespace:kube-system,Attempt:0,}"
Jan 23 01:09:23.632784 containerd[1569]: time="2026-01-23T01:09:23.632687529Z" level=info msg="connecting to shim 361fa94437ed968559129143a3f3435605babda7deee7fdfac597896d73b02c3" address="unix:///run/containerd/s/d241e3054f8a14f4d9d16e7943fb47c5600b21cdb41bc8accc1995d42f15cb07" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:09:23.683940 containerd[1569]: time="2026-01-23T01:09:23.683529118Z" level=info msg="connecting to shim cf574b74342203e597570d88930f764738d09b4b4058f783045d42cc8870e81a" address="unix:///run/containerd/s/add0d0a6fc08d8aef60e0504678fcd21801502f477625ab99799794ef4354c4c" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:09:23.705864 systemd[1]: Started cri-containerd-361fa94437ed968559129143a3f3435605babda7deee7fdfac597896d73b02c3.scope - libcontainer container 361fa94437ed968559129143a3f3435605babda7deee7fdfac597896d73b02c3.
Jan 23 01:09:23.723013 containerd[1569]: time="2026-01-23T01:09:23.722517861Z" level=info msg="connecting to shim f53694279c11035d4f89518c2bfb0e54dcda1356728d60421d20e4eec75bb4f7" address="unix:///run/containerd/s/2647caa35425c22dda6979e4303f420601e3540b155eb5735ba63ad69f80bce3" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:09:23.726934 kubelet[2423]: E0123 01:09:23.726873 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba?timeout=10s\": dial tcp 10.128.0.88:6443: connect: connection refused" interval="800ms"
Jan 23 01:09:23.750219 systemd[1]: Started cri-containerd-cf574b74342203e597570d88930f764738d09b4b4058f783045d42cc8870e81a.scope - libcontainer container cf574b74342203e597570d88930f764738d09b4b4058f783045d42cc8870e81a.
Jan 23 01:09:23.788182 systemd[1]: Started cri-containerd-f53694279c11035d4f89518c2bfb0e54dcda1356728d60421d20e4eec75bb4f7.scope - libcontainer container f53694279c11035d4f89518c2bfb0e54dcda1356728d60421d20e4eec75bb4f7.
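Worth noting in the lease entries above: the retry interval doubles on each failure while the API server at 10.128.0.88:6443 refuses connections (interval="200ms" at 01:09:23.124, "400ms" at 01:09:23.325, "800ms" at 01:09:23.726). A sketch of that doubling; the cap is an assumption for illustration, since no cap is visible in this log:

    # Doubling retry interval, as in the "Failed to ensure lease exists"
    # entries above. The 7s cap is assumed, not taken from the log.
    interval_ms, cap_ms = 200, 7_000
    for attempt in range(1, 7):
        print(f"attempt {attempt}: will retry in {interval_ms}ms")
        interval_ms = min(interval_ms * 2, cap_ms)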
Jan 23 01:09:23.859187 containerd[1569]: time="2026-01-23T01:09:23.859099082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba,Uid:d8867f1234da07b0bb06a3897d0b459f,Namespace:kube-system,Attempt:0,} returns sandbox id \"361fa94437ed968559129143a3f3435605babda7deee7fdfac597896d73b02c3\""
Jan 23 01:09:23.867010 kubelet[2423]: E0123 01:09:23.866095 2423 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c4"
Jan 23 01:09:23.872219 containerd[1569]: time="2026-01-23T01:09:23.872156641Z" level=info msg="CreateContainer within sandbox \"361fa94437ed968559129143a3f3435605babda7deee7fdfac597896d73b02c3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 23 01:09:23.893214 containerd[1569]: time="2026-01-23T01:09:23.893162611Z" level=info msg="Container d44753141696b5579e93ea15cd636eaf9a64e77a016a73ba79229a99d7445672: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:09:23.910926 containerd[1569]: time="2026-01-23T01:09:23.910749151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba,Uid:8594ad12744bceaeb64f85a375e5a7ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf574b74342203e597570d88930f764738d09b4b4058f783045d42cc8870e81a\""
Jan 23 01:09:23.912369 containerd[1569]: time="2026-01-23T01:09:23.912327272Z" level=info msg="CreateContainer within sandbox \"361fa94437ed968559129143a3f3435605babda7deee7fdfac597896d73b02c3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d44753141696b5579e93ea15cd636eaf9a64e77a016a73ba79229a99d7445672\""
Jan 23 01:09:23.914381 containerd[1569]: time="2026-01-23T01:09:23.914289055Z" level=info msg="StartContainer for \"d44753141696b5579e93ea15cd636eaf9a64e77a016a73ba79229a99d7445672\""
Jan 23 01:09:23.915597 kubelet[2423]: E0123 01:09:23.915552 2423 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31"
Jan 23 01:09:23.917864 containerd[1569]: time="2026-01-23T01:09:23.917795215Z" level=info msg="connecting to shim d44753141696b5579e93ea15cd636eaf9a64e77a016a73ba79229a99d7445672" address="unix:///run/containerd/s/d241e3054f8a14f4d9d16e7943fb47c5600b21cdb41bc8accc1995d42f15cb07" protocol=ttrpc version=3
Jan 23 01:09:23.922946 containerd[1569]: time="2026-01-23T01:09:23.922212687Z" level=info msg="CreateContainer within sandbox \"cf574b74342203e597570d88930f764738d09b4b4058f783045d42cc8870e81a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 23 01:09:23.936082 containerd[1569]: time="2026-01-23T01:09:23.936031598Z" level=info msg="Container 981674482a4865be73fb8430ecf4ac506f6301c78c821fd46dceb41692cbb8e1: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:09:23.956579 kubelet[2423]: I0123 01:09:23.956541 2423 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.957413 kubelet[2423]: E0123 01:09:23.957365 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.88:6443/api/v1/nodes\": dial tcp 10.128.0.88:6443: connect: connection refused" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:23.960280 systemd[1]: Started cri-containerd-d44753141696b5579e93ea15cd636eaf9a64e77a016a73ba79229a99d7445672.scope - libcontainer container d44753141696b5579e93ea15cd636eaf9a64e77a016a73ba79229a99d7445672.
Jan 23 01:09:23.961802 containerd[1569]: time="2026-01-23T01:09:23.960617273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba,Uid:483a400539e522dde0c7b9d971c20226,Namespace:kube-system,Attempt:0,} returns sandbox id \"f53694279c11035d4f89518c2bfb0e54dcda1356728d60421d20e4eec75bb4f7\""
Jan 23 01:09:23.965405 kubelet[2423]: E0123 01:09:23.965369 2423 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c4"
Jan 23 01:09:23.969601 containerd[1569]: time="2026-01-23T01:09:23.969531495Z" level=info msg="CreateContainer within sandbox \"f53694279c11035d4f89518c2bfb0e54dcda1356728d60421d20e4eec75bb4f7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 23 01:09:23.973491 containerd[1569]: time="2026-01-23T01:09:23.973414801Z" level=info msg="CreateContainer within sandbox \"cf574b74342203e597570d88930f764738d09b4b4058f783045d42cc8870e81a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"981674482a4865be73fb8430ecf4ac506f6301c78c821fd46dceb41692cbb8e1\""
Jan 23 01:09:23.976134 containerd[1569]: time="2026-01-23T01:09:23.976098724Z" level=info msg="StartContainer for \"981674482a4865be73fb8430ecf4ac506f6301c78c821fd46dceb41692cbb8e1\""
Jan 23 01:09:23.978826 containerd[1569]: time="2026-01-23T01:09:23.978769276Z" level=info msg="connecting to shim 981674482a4865be73fb8430ecf4ac506f6301c78c821fd46dceb41692cbb8e1" address="unix:///run/containerd/s/add0d0a6fc08d8aef60e0504678fcd21801502f477625ab99799794ef4354c4c" protocol=ttrpc version=3
Jan 23 01:09:23.988291 containerd[1569]: time="2026-01-23T01:09:23.988214376Z" level=info msg="Container 9408b6aaf76f507dd0aee863e6a1d61b35c8d3a94a0c55a7f9f5c9b574c09b52: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:09:24.004758 containerd[1569]: time="2026-01-23T01:09:24.004699470Z" level=info msg="CreateContainer within sandbox \"f53694279c11035d4f89518c2bfb0e54dcda1356728d60421d20e4eec75bb4f7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9408b6aaf76f507dd0aee863e6a1d61b35c8d3a94a0c55a7f9f5c9b574c09b52\""
Jan 23 01:09:24.007506 containerd[1569]: time="2026-01-23T01:09:24.007450100Z" level=info msg="StartContainer for \"9408b6aaf76f507dd0aee863e6a1d61b35c8d3a94a0c55a7f9f5c9b574c09b52\""
Jan 23 01:09:24.012269 containerd[1569]: time="2026-01-23T01:09:24.012211724Z" level=info msg="connecting to shim 9408b6aaf76f507dd0aee863e6a1d61b35c8d3a94a0c55a7f9f5c9b574c09b52" address="unix:///run/containerd/s/2647caa35425c22dda6979e4303f420601e3540b155eb5735ba63ad69f80bce3" protocol=ttrpc version=3
Jan 23 01:09:24.021683 systemd[1]: Started cri-containerd-981674482a4865be73fb8430ecf4ac506f6301c78c821fd46dceb41692cbb8e1.scope - libcontainer container 981674482a4865be73fb8430ecf4ac506f6301c78c821fd46dceb41692cbb8e1.
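The "Hostname for pod was too long" entries above show the kubelet cutting pod hostnames to 63 characters, the DNS label limit (hostnameMaxLen=63). For the names in this log a plain prefix cut reproduces the logged value exactly; the real implementation also handles a cut that would end in "-" or ".", which never arises here:

    # 63 is the hostnameMaxLen from the kubelet entries above.
    name = "kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
    truncated = name[:63]
    assert truncated == "kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31"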
Jan 23 01:09:24.050483 systemd[1]: Started cri-containerd-9408b6aaf76f507dd0aee863e6a1d61b35c8d3a94a0c55a7f9f5c9b574c09b52.scope - libcontainer container 9408b6aaf76f507dd0aee863e6a1d61b35c8d3a94a0c55a7f9f5c9b574c09b52.
Jan 23 01:09:24.104793 containerd[1569]: time="2026-01-23T01:09:24.104727805Z" level=info msg="StartContainer for \"d44753141696b5579e93ea15cd636eaf9a64e77a016a73ba79229a99d7445672\" returns successfully"
Jan 23 01:09:24.191950 kubelet[2423]: E0123 01:09:24.191393 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:24.206013 containerd[1569]: time="2026-01-23T01:09:24.205971053Z" level=info msg="StartContainer for \"981674482a4865be73fb8430ecf4ac506f6301c78c821fd46dceb41692cbb8e1\" returns successfully"
Jan 23 01:09:24.263281 containerd[1569]: time="2026-01-23T01:09:24.263159771Z" level=info msg="StartContainer for \"9408b6aaf76f507dd0aee863e6a1d61b35c8d3a94a0c55a7f9f5c9b574c09b52\" returns successfully"
Jan 23 01:09:24.763997 kubelet[2423]: I0123 01:09:24.763534 2423 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:25.208972 kubelet[2423]: E0123 01:09:25.208579 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:25.209902 kubelet[2423]: E0123 01:09:25.209868 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:25.210270 kubelet[2423]: E0123 01:09:25.210243 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:26.215417 kubelet[2423]: E0123 01:09:26.215379 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:26.216588 kubelet[2423]: E0123 01:09:26.216034 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:26.686794 kubelet[2423]: E0123 01:09:26.686436 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:27.218071 kubelet[2423]: E0123 01:09:27.218026 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:27.218938 kubelet[2423]: E0123 01:09:27.218886 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:27.643403 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 01:09:27.824543 kubelet[2423]: E0123 01:09:27.824445 2423 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:27.998525 kubelet[2423]: I0123 01:09:27.997945 2423 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:27.998525 kubelet[2423]: E0123 01:09:27.998049 2423 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\": node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found"
Jan 23 01:09:28.028533 kubelet[2423]: I0123 01:09:28.028458 2423 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:28.052095 kubelet[2423]: E0123 01:09:28.052012 2423 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:28.052095 kubelet[2423]: I0123 01:09:28.052059 2423 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:28.056754 kubelet[2423]: E0123 01:09:28.056443 2423 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:28.056754 kubelet[2423]: I0123 01:09:28.056492 2423 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:28.068962 kubelet[2423]: E0123 01:09:28.068101 2423 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:28.090282 kubelet[2423]: I0123 01:09:28.090198 2423 apiserver.go:52] "Watching apiserver"
Jan 23 01:09:28.123469 kubelet[2423]: I0123 01:09:28.123398 2423 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 01:09:28.212120 kubelet[2423]: I0123 01:09:28.211411 2423 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:28.214199 kubelet[2423]: E0123 01:09:28.214142 2423 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba"
Jan 23 01:09:29.964007 systemd[1]: Reload requested from client PID 2697 ('systemctl') (unit session-7.scope)...
Jan 23 01:09:29.964030 systemd[1]: Reloading...
Jan 23 01:09:30.106978 zram_generator::config[2737]: No configuration found.
Jan 23 01:09:30.441272 systemd[1]: Reloading finished in 476 ms.
Jan 23 01:09:30.489559 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:09:30.502239 systemd[1]: kubelet.service: Deactivated successfully.
Jan 23 01:09:30.502815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:09:30.503035 systemd[1]: kubelet.service: Consumed 1.165s CPU time, 132.8M memory peak.
Jan 23 01:09:30.505820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:09:30.904193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:09:30.919655 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 01:09:31.007923 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 01:09:31.007923 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 01:09:31.007923 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 01:09:31.007923 kubelet[2789]: I0123 01:09:31.007571 2789 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 01:09:31.022006 kubelet[2789]: I0123 01:09:31.021137 2789 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 01:09:31.022006 kubelet[2789]: I0123 01:09:31.021170 2789 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 01:09:31.022006 kubelet[2789]: I0123 01:09:31.021595 2789 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 01:09:31.024939 kubelet[2789]: I0123 01:09:31.024748 2789 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 01:09:31.029983 kubelet[2789]: I0123 01:09:31.029413 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 01:09:31.040946 kubelet[2789]: I0123 01:09:31.040383 2789 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 01:09:31.045525 kubelet[2789]: I0123 01:09:31.044676 2789 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 01:09:31.045525 kubelet[2789]: I0123 01:09:31.045026 2789 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 01:09:31.046160 kubelet[2789]: I0123 01:09:31.045069 2789 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 01:09:31.046160 kubelet[2789]: I0123 01:09:31.046143 2789 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 01:09:31.046160 kubelet[2789]: I0123 01:09:31.046164 2789 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 01:09:31.046847 kubelet[2789]: I0123 01:09:31.046234 2789 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 01:09:31.046847 kubelet[2789]: I0123 01:09:31.046477 2789 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 01:09:31.046847 kubelet[2789]: I0123 01:09:31.046510 2789 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 01:09:31.046847 kubelet[2789]: I0123 01:09:31.046545 2789 kubelet.go:352] "Adding apiserver pod source"
Jan 23 01:09:31.046847 kubelet[2789]: I0123 01:09:31.046561 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 01:09:31.055351 kubelet[2789]: I0123 01:09:31.054192 2789 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 01:09:31.056767 kubelet[2789]: I0123 01:09:31.056481 2789 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 01:09:31.061603 kubelet[2789]: I0123 01:09:31.060853 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 01:09:31.061603 kubelet[2789]: I0123 01:09:31.060931 2789 server.go:1287] "Started kubelet"
Jan 23 01:09:31.070940 kubelet[2789]: I0123 01:09:31.070014 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 01:09:31.072944 kubelet[2789]: I0123 01:09:31.072305 2789 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 01:09:31.079678 kubelet[2789]: I0123 01:09:31.079546 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 01:09:31.092142 kubelet[2789]: I0123 01:09:31.090987 2789 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 01:09:31.094379 kubelet[2789]: I0123 01:09:31.094193 2789 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 01:09:31.097935 kubelet[2789]: I0123 01:09:31.097481 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 01:09:31.104340 kubelet[2789]: I0123 01:09:31.103606 2789 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 01:09:31.107944 kubelet[2789]: E0123 01:09:31.105249 2789 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" not found"
Jan 23 01:09:31.122143 kubelet[2789]: I0123 01:09:31.122105 2789 factory.go:221] Registration of the containerd container factory successfully
Jan 23 01:09:31.122143 kubelet[2789]: I0123 01:09:31.122136 2789 factory.go:221] Registration of the systemd container factory successfully
Jan 23 01:09:31.122366 kubelet[2789]: I0123 01:09:31.122238 2789 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 01:09:31.126458 kubelet[2789]: I0123 01:09:31.126418 2789 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 01:09:31.127936 kubelet[2789]: I0123 01:09:31.126807 2789 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 01:09:31.134058 kubelet[2789]: I0123 01:09:31.133881 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 01:09:31.136297 kubelet[2789]: I0123 01:09:31.135996 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 01:09:31.136297 kubelet[2789]: I0123 01:09:31.136038 2789 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 01:09:31.136297 kubelet[2789]: I0123 01:09:31.136065 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 01:09:31.136297 kubelet[2789]: I0123 01:09:31.136076 2789 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 23 01:09:31.136297 kubelet[2789]: E0123 01:09:31.136142 2789 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 01:09:31.150762 kubelet[2789]: E0123 01:09:31.150338 2789 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 01:09:31.237747 kubelet[2789]: I0123 01:09:31.236110 2789 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 01:09:31.237747 kubelet[2789]: I0123 01:09:31.236135 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 01:09:31.237747 kubelet[2789]: E0123 01:09:31.237680 2789 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 01:09:31.238064 kubelet[2789]: I0123 01:09:31.237824 2789 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 01:09:31.238170 kubelet[2789]: I0123 01:09:31.238145 2789 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 23 01:09:31.238235 kubelet[2789]: I0123 01:09:31.238168 2789 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 23 01:09:31.238235 kubelet[2789]: I0123 01:09:31.238201 2789 policy_none.go:49] "None policy: Start"
Jan 23 01:09:31.238235 kubelet[2789]: I0123 01:09:31.238218 2789 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 01:09:31.238368 kubelet[2789]: I0123 01:09:31.238236 2789 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 01:09:31.238651 kubelet[2789]: I0123 01:09:31.238415 2789 state_mem.go:75] "Updated machine memory state"
Jan 23 01:09:31.245931 kubelet[2789]: I0123 01:09:31.245886 2789 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 01:09:31.246942 kubelet[2789]: I0123 01:09:31.246712 2789 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 01:09:31.246942 kubelet[2789]: I0123 01:09:31.246734 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 01:09:31.248145 kubelet[2789]: I0123 01:09:31.248125 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 01:09:31.250008 kubelet[2789]: E0123 01:09:31.249506 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Jan 23 01:09:31.373004 kubelet[2789]: I0123 01:09:31.372963 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.384197 kubelet[2789]: I0123 01:09:31.383466 2789 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.384720 kubelet[2789]: I0123 01:09:31.384478 2789 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.440724 kubelet[2789]: I0123 01:09:31.439203 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.440724 kubelet[2789]: I0123 01:09:31.439768 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.440724 kubelet[2789]: I0123 01:09:31.440711 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.448546 kubelet[2789]: W0123 01:09:31.448509 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 23 01:09:31.450981 kubelet[2789]: W0123 01:09:31.450636 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 23 01:09:31.453020 kubelet[2789]: W0123 01:09:31.452979 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 23 01:09:31.529454 kubelet[2789]: I0123 01:09:31.528996 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8867f1234da07b0bb06a3897d0b459f-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"d8867f1234da07b0bb06a3897d0b459f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.529454 kubelet[2789]: I0123 01:09:31.529064 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8867f1234da07b0bb06a3897d0b459f-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"d8867f1234da07b0bb06a3897d0b459f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.529454 kubelet[2789]: I0123 01:09:31.529100 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8594ad12744bceaeb64f85a375e5a7ad-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"8594ad12744bceaeb64f85a375e5a7ad\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.529454 kubelet[2789]: I0123 01:09:31.529133 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8594ad12744bceaeb64f85a375e5a7ad-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"8594ad12744bceaeb64f85a375e5a7ad\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.529773 kubelet[2789]: I0123 01:09:31.529162 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8594ad12744bceaeb64f85a375e5a7ad-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"8594ad12744bceaeb64f85a375e5a7ad\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.529773 kubelet[2789]: I0123 01:09:31.529193 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8594ad12744bceaeb64f85a375e5a7ad-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"8594ad12744bceaeb64f85a375e5a7ad\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.529773 kubelet[2789]: I0123 01:09:31.529225 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/483a400539e522dde0c7b9d971c20226-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"483a400539e522dde0c7b9d971c20226\") " pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.529773 kubelet[2789]: I0123 01:09:31.529256 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8867f1234da07b0bb06a3897d0b459f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"d8867f1234da07b0bb06a3897d0b459f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:31.530015 kubelet[2789]: I0123 01:09:31.529277 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8594ad12744bceaeb64f85a375e5a7ad-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" (UID: \"8594ad12744bceaeb64f85a375e5a7ad\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:32.049935 kubelet[2789]: I0123 01:09:32.049861 2789 apiserver.go:52] "Watching apiserver" Jan 23 01:09:32.127183 kubelet[2789]: I0123 01:09:32.127102 2789 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:09:32.201800 kubelet[2789]: I0123 01:09:32.201759 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:32.203749 kubelet[2789]: I0123 01:09:32.202104 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:32.213099 kubelet[2789]: W0123 
01:09:32.213039 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 23 01:09:32.214417 kubelet[2789]: E0123 01:09:32.214014 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:32.218193 kubelet[2789]: W0123 01:09:32.218164 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 23 01:09:32.218502 kubelet[2789]: E0123 01:09:32.218397 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:09:32.272654 kubelet[2789]: I0123 01:09:32.272470 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" podStartSLOduration=1.272446827 podStartE2EDuration="1.272446827s" podCreationTimestamp="2026-01-23 01:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:09:32.254650259 +0000 UTC m=+1.327159127" watchObservedRunningTime="2026-01-23 01:09:32.272446827 +0000 UTC m=+1.344955685" Jan 23 01:09:32.291284 kubelet[2789]: I0123 01:09:32.290936 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" podStartSLOduration=1.290889122 podStartE2EDuration="1.290889122s" podCreationTimestamp="2026-01-23 01:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:09:32.273327198 +0000 UTC m=+1.345836064" watchObservedRunningTime="2026-01-23 01:09:32.290889122 +0000 UTC m=+1.363397972" Jan 23 01:09:35.008842 kubelet[2789]: I0123 01:09:35.008540 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" podStartSLOduration=4.008512992 podStartE2EDuration="4.008512992s" podCreationTimestamp="2026-01-23 01:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:09:32.293533682 +0000 UTC m=+1.366042561" watchObservedRunningTime="2026-01-23 01:09:35.008512992 +0000 UTC m=+4.081021859" Jan 23 01:09:35.173705 kubelet[2789]: I0123 01:09:35.173603 2789 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 01:09:35.174146 containerd[1569]: time="2026-01-23T01:09:35.174099606Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
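The runtime-config entry above and the "Updating Pod CIDR" entry that follows show the node receiving pod CIDR 192.168.0.0/24 and pushing it to the runtime over CRI. A minimal sketch of what that range means for the node, using the value from the log (the arithmetic helper is illustrative, not kubelet code):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Pod CIDR pushed to the runtime in the log entries above.
	_, cidr, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := cidr.Mask.Size()
	// A /24 leaves 2^8 = 256 addresses; minus the network and
	// broadcast addresses, that is 254 assignable pod IPs on this node.
	usable := (1 << (bits - ones)) - 2
	fmt.Printf("node pod range %s -> %d usable addresses\n", cidr, usable)
}
```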
Jan 23 01:09:35.174857 kubelet[2789]: I0123 01:09:35.174814 2789 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:09:35.873127 systemd[1]: Created slice kubepods-besteffort-pod3b4736d7_af6a_4b4d_8496_87ca6cdbda92.slice - libcontainer container kubepods-besteffort-pod3b4736d7_af6a_4b4d_8496_87ca6cdbda92.slice. Jan 23 01:09:35.957368 kubelet[2789]: I0123 01:09:35.957305 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b4736d7-af6a-4b4d-8496-87ca6cdbda92-xtables-lock\") pod \"kube-proxy-t9k4d\" (UID: \"3b4736d7-af6a-4b4d-8496-87ca6cdbda92\") " pod="kube-system/kube-proxy-t9k4d" Jan 23 01:09:35.957611 kubelet[2789]: I0123 01:09:35.957425 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b4736d7-af6a-4b4d-8496-87ca6cdbda92-lib-modules\") pod \"kube-proxy-t9k4d\" (UID: \"3b4736d7-af6a-4b4d-8496-87ca6cdbda92\") " pod="kube-system/kube-proxy-t9k4d" Jan 23 01:09:35.957611 kubelet[2789]: I0123 01:09:35.957509 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnrl8\" (UniqueName: \"kubernetes.io/projected/3b4736d7-af6a-4b4d-8496-87ca6cdbda92-kube-api-access-hnrl8\") pod \"kube-proxy-t9k4d\" (UID: \"3b4736d7-af6a-4b4d-8496-87ca6cdbda92\") " pod="kube-system/kube-proxy-t9k4d" Jan 23 01:09:35.957611 kubelet[2789]: I0123 01:09:35.957598 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b4736d7-af6a-4b4d-8496-87ca6cdbda92-kube-proxy\") pod \"kube-proxy-t9k4d\" (UID: \"3b4736d7-af6a-4b4d-8496-87ca6cdbda92\") " pod="kube-system/kube-proxy-t9k4d" Jan 23 01:09:36.185859 containerd[1569]: time="2026-01-23T01:09:36.185785308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t9k4d,Uid:3b4736d7-af6a-4b4d-8496-87ca6cdbda92,Namespace:kube-system,Attempt:0,}" Jan 23 01:09:36.224269 containerd[1569]: time="2026-01-23T01:09:36.224192526Z" level=info msg="connecting to shim e3f9aa301f35abe05a709d3fa0037cbc82f365cd2a6c5c4cacab6df2550aa15f" address="unix:///run/containerd/s/38703dec85d342245b34015dacc183bdab94360ffcdb214de3ce57c443477a5f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:09:36.285801 systemd[1]: Started cri-containerd-e3f9aa301f35abe05a709d3fa0037cbc82f365cd2a6c5c4cacab6df2550aa15f.scope - libcontainer container e3f9aa301f35abe05a709d3fa0037cbc82f365cd2a6c5c4cacab6df2550aa15f. Jan 23 01:09:36.333619 systemd[1]: Created slice kubepods-besteffort-pod8239918b_4ee6_4560_ae64_afafd2eb06e7.slice - libcontainer container kubepods-besteffort-pod8239918b_4ee6_4560_ae64_afafd2eb06e7.slice. 
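With cgroupDriver=systemd and CgroupsPerQOS enabled (see the container-manager NodeConfig earlier), each best-effort pod gets a transient systemd slice under kubepods-besteffort, and the pod UID's dashes are rewritten to underscores because a dash denotes hierarchy nesting in systemd slice names. A small sketch of the mapping as it appears in the log; the helper name is hypothetical, and kubelet's real implementation lives in its container-manager code:

```go
package main

import (
	"fmt"
	"strings"
)

// besteffortSliceName mirrors what the log shows kubelet requesting from
// systemd: dashes in the pod UID would be read as slice-hierarchy
// separators, so they are rewritten to underscores. (Illustrative helper.)
func besteffortSliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID of kube-proxy-t9k4d from the volume-reconciler entries above.
	fmt.Println(besteffortSliceName("3b4736d7-af6a-4b4d-8496-87ca6cdbda92"))
	// -> kubepods-besteffort-pod3b4736d7_af6a_4b4d_8496_87ca6cdbda92.slice
}
```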
Jan 23 01:09:36.360067 kubelet[2789]: I0123 01:09:36.359699 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj72v\" (UniqueName: \"kubernetes.io/projected/8239918b-4ee6-4560-ae64-afafd2eb06e7-kube-api-access-sj72v\") pod \"tigera-operator-7dcd859c48-lm92r\" (UID: \"8239918b-4ee6-4560-ae64-afafd2eb06e7\") " pod="tigera-operator/tigera-operator-7dcd859c48-lm92r" Jan 23 01:09:36.360067 kubelet[2789]: I0123 01:09:36.359767 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8239918b-4ee6-4560-ae64-afafd2eb06e7-var-lib-calico\") pod \"tigera-operator-7dcd859c48-lm92r\" (UID: \"8239918b-4ee6-4560-ae64-afafd2eb06e7\") " pod="tigera-operator/tigera-operator-7dcd859c48-lm92r" Jan 23 01:09:36.390166 containerd[1569]: time="2026-01-23T01:09:36.390113940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t9k4d,Uid:3b4736d7-af6a-4b4d-8496-87ca6cdbda92,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3f9aa301f35abe05a709d3fa0037cbc82f365cd2a6c5c4cacab6df2550aa15f\"" Jan 23 01:09:36.395499 containerd[1569]: time="2026-01-23T01:09:36.395451656Z" level=info msg="CreateContainer within sandbox \"e3f9aa301f35abe05a709d3fa0037cbc82f365cd2a6c5c4cacab6df2550aa15f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:09:36.413941 containerd[1569]: time="2026-01-23T01:09:36.411691869Z" level=info msg="Container b4620dfcfa8a632650dfef1560af5232c1ebb24e4a18a376a029edb47fb285fd: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:09:36.419737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount162991052.mount: Deactivated successfully. Jan 23 01:09:36.429765 containerd[1569]: time="2026-01-23T01:09:36.429622100Z" level=info msg="CreateContainer within sandbox \"e3f9aa301f35abe05a709d3fa0037cbc82f365cd2a6c5c4cacab6df2550aa15f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b4620dfcfa8a632650dfef1560af5232c1ebb24e4a18a376a029edb47fb285fd\"" Jan 23 01:09:36.430951 containerd[1569]: time="2026-01-23T01:09:36.430456987Z" level=info msg="StartContainer for \"b4620dfcfa8a632650dfef1560af5232c1ebb24e4a18a376a029edb47fb285fd\"" Jan 23 01:09:36.433110 containerd[1569]: time="2026-01-23T01:09:36.433008361Z" level=info msg="connecting to shim b4620dfcfa8a632650dfef1560af5232c1ebb24e4a18a376a029edb47fb285fd" address="unix:///run/containerd/s/38703dec85d342245b34015dacc183bdab94360ffcdb214de3ce57c443477a5f" protocol=ttrpc version=3 Jan 23 01:09:36.459139 systemd[1]: Started cri-containerd-b4620dfcfa8a632650dfef1560af5232c1ebb24e4a18a376a029edb47fb285fd.scope - libcontainer container b4620dfcfa8a632650dfef1560af5232c1ebb24e4a18a376a029edb47fb285fd. 
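The containerd lines above trace the standard CRI sequence for bringing up a pod: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer runs it (its "returns successfully" entry follows below). Note that both "connecting to shim" lines reuse the same unix socket, since containerd runs one shim per pod. A hedged sketch of that sequence against the published CRI gRPC API; the socket path is the conventional containerd endpoint, and the empty configs are placeholders that a real caller must fill with pod and container metadata:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI endpoint (grpc.Dial on older grpc-go).
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox -> sandbox id (e3f9aa30... in the log).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{ /* pod metadata elided */ },
	})
	if err != nil {
		panic(err)
	}
	// 2. CreateContainer within that sandbox -> container id (b4620dfc...).
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config:       &runtimeapi.ContainerConfig{ /* image, mounts elided */ },
	})
	if err != nil {
		panic(err)
	}
	// 3. StartContainer, matching the StartContainer entries in the log.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: ctr.ContainerId,
	}); err != nil {
		panic(err)
	}
	fmt.Println("sandbox", sb.PodSandboxId, "container", ctr.ContainerId)
}
```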
Jan 23 01:09:36.571880 containerd[1569]: time="2026-01-23T01:09:36.571791576Z" level=info msg="StartContainer for \"b4620dfcfa8a632650dfef1560af5232c1ebb24e4a18a376a029edb47fb285fd\" returns successfully" Jan 23 01:09:36.647973 containerd[1569]: time="2026-01-23T01:09:36.647896241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lm92r,Uid:8239918b-4ee6-4560-ae64-afafd2eb06e7,Namespace:tigera-operator,Attempt:0,}" Jan 23 01:09:36.675936 containerd[1569]: time="2026-01-23T01:09:36.675549402Z" level=info msg="connecting to shim 33ebd7b46961325fb489bede5420a2d5414014ce8976bba8ed967db4912e17e3" address="unix:///run/containerd/s/65810955fafb5fafbe68d42e3b12e1b6624d0f9bbcbe6695a2a70a28b699d400" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:09:36.718806 systemd[1]: Started cri-containerd-33ebd7b46961325fb489bede5420a2d5414014ce8976bba8ed967db4912e17e3.scope - libcontainer container 33ebd7b46961325fb489bede5420a2d5414014ce8976bba8ed967db4912e17e3. Jan 23 01:09:36.826351 containerd[1569]: time="2026-01-23T01:09:36.826225947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lm92r,Uid:8239918b-4ee6-4560-ae64-afafd2eb06e7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"33ebd7b46961325fb489bede5420a2d5414014ce8976bba8ed967db4912e17e3\"" Jan 23 01:09:36.829299 containerd[1569]: time="2026-01-23T01:09:36.828979315Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 01:09:37.237978 kubelet[2789]: I0123 01:09:37.237433 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t9k4d" podStartSLOduration=2.237393281 podStartE2EDuration="2.237393281s" podCreationTimestamp="2026-01-23 01:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:09:37.23651074 +0000 UTC m=+6.309019627" watchObservedRunningTime="2026-01-23 01:09:37.237393281 +0000 UTC m=+6.309902148" Jan 23 01:09:37.869422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041615663.mount: Deactivated successfully. 
Jan 23 01:09:38.758476 containerd[1569]: time="2026-01-23T01:09:38.758416939Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:38.759974 containerd[1569]: time="2026-01-23T01:09:38.759665878Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 01:09:38.761918 containerd[1569]: time="2026-01-23T01:09:38.761832958Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:38.767569 containerd[1569]: time="2026-01-23T01:09:38.767516472Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:38.768984 containerd[1569]: time="2026-01-23T01:09:38.768887498Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.939857265s" Jan 23 01:09:38.768984 containerd[1569]: time="2026-01-23T01:09:38.768965638Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 01:09:38.775056 containerd[1569]: time="2026-01-23T01:09:38.775003207Z" level=info msg="CreateContainer within sandbox \"33ebd7b46961325fb489bede5420a2d5414014ce8976bba8ed967db4912e17e3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 01:09:38.789172 containerd[1569]: time="2026-01-23T01:09:38.789107601Z" level=info msg="Container cb9703cbcc4182ecafa8d59ec3a1c6e52285fa6cc81355ba38fa360984d8513d: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:09:38.798574 containerd[1569]: time="2026-01-23T01:09:38.798497120Z" level=info msg="CreateContainer within sandbox \"33ebd7b46961325fb489bede5420a2d5414014ce8976bba8ed967db4912e17e3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cb9703cbcc4182ecafa8d59ec3a1c6e52285fa6cc81355ba38fa360984d8513d\"" Jan 23 01:09:38.799715 containerd[1569]: time="2026-01-23T01:09:38.799425110Z" level=info msg="StartContainer for \"cb9703cbcc4182ecafa8d59ec3a1c6e52285fa6cc81355ba38fa360984d8513d\"" Jan 23 01:09:38.801506 containerd[1569]: time="2026-01-23T01:09:38.801454205Z" level=info msg="connecting to shim cb9703cbcc4182ecafa8d59ec3a1c6e52285fa6cc81355ba38fa360984d8513d" address="unix:///run/containerd/s/65810955fafb5fafbe68d42e3b12e1b6624d0f9bbcbe6695a2a70a28b699d400" protocol=ttrpc version=3 Jan 23 01:09:38.837524 systemd[1]: Started cri-containerd-cb9703cbcc4182ecafa8d59ec3a1c6e52285fa6cc81355ba38fa360984d8513d.scope - libcontainer container cb9703cbcc4182ecafa8d59ec3a1c6e52285fa6cc81355ba38fa360984d8513d. 
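The pull above reports the operator image at 25,057,686 bytes fetched in 1.939857265s (the "bytes read=25061691" figure is slightly higher, presumably counting manifest and config blobs). A back-of-the-envelope throughput check on those two numbers:

```go
package main

import "fmt"

func main() {
	// Figures from the PullImage completion entry above.
	const bytes = 25057686      // reported image size
	const seconds = 1.939857265 // reported pull duration
	fmt.Printf("%.1f MiB in %.2fs = %.1f MiB/s\n",
		bytes/1024.0/1024.0, seconds, bytes/1024.0/1024.0/seconds)
	// -> 23.9 MiB in 1.94s = 12.3 MiB/s
}
```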
Jan 23 01:09:38.894448 containerd[1569]: time="2026-01-23T01:09:38.894391878Z" level=info msg="StartContainer for \"cb9703cbcc4182ecafa8d59ec3a1c6e52285fa6cc81355ba38fa360984d8513d\" returns successfully" Jan 23 01:09:39.240071 kubelet[2789]: I0123 01:09:39.239806 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-lm92r" podStartSLOduration=1.296854656 podStartE2EDuration="3.239778932s" podCreationTimestamp="2026-01-23 01:09:36 +0000 UTC" firstStartedPulling="2026-01-23 01:09:36.828438714 +0000 UTC m=+5.900947567" lastFinishedPulling="2026-01-23 01:09:38.771362984 +0000 UTC m=+7.843871843" observedRunningTime="2026-01-23 01:09:39.239304325 +0000 UTC m=+8.311813191" watchObservedRunningTime="2026-01-23 01:09:39.239778932 +0000 UTC m=+8.312287800" Jan 23 01:09:41.433959 update_engine[1544]: I20260123 01:09:41.430959 1544 update_attempter.cc:509] Updating boot flags... Jan 23 01:09:46.715588 sudo[1867]: pam_unix(sudo:session): session closed for user root Jan 23 01:09:46.752942 sshd[1866]: Connection closed by 4.153.228.146 port 48404 Jan 23 01:09:46.752227 sshd-session[1863]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:46.762536 systemd[1]: sshd@6-10.128.0.88:22-4.153.228.146:48404.service: Deactivated successfully. Jan 23 01:09:46.771947 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:09:46.772302 systemd[1]: session-7.scope: Consumed 6.609s CPU time, 229.5M memory peak. Jan 23 01:09:46.779853 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:09:46.782020 systemd-logind[1542]: Removed session 7. Jan 23 01:09:54.372284 systemd[1]: Created slice kubepods-besteffort-pod3fea7210_9fd9_4778_98c6_8e5563b237fc.slice - libcontainer container kubepods-besteffort-pod3fea7210_9fd9_4778_98c6_8e5563b237fc.slice. 
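In the tigera-operator startup entry above, podStartE2EDuration (3.239778932s) and podStartSLOduration (1.296854656s) differ by exactly the image-pull window, lastFinishedPulling minus firstStartedPulling: the SLO metric deliberately excludes pull time. Compare the earlier kube-proxy and static-pod entries, where the pull timestamps are the zero time and the two durations are equal. A quick check with the log's own timestamps; it agrees with the reported SLO value up to a few nanoseconds lost to timestamp formatting:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the tigera-operator pod_startup_latency_tracker entry above.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	firstPull := parse("2026-01-23 01:09:36.828438714 +0000 UTC")
	lastPull := parse("2026-01-23 01:09:38.771362984 +0000 UTC")
	e2e := 3239778932 * time.Nanosecond // podStartE2EDuration

	// The SLO duration excludes time spent pulling images.
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo) // 1.296854662s, i.e. the logged 1.296854656s up to rounding
}
```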
Jan 23 01:09:54.388864 kubelet[2789]: I0123 01:09:54.388814 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3fea7210-9fd9-4778-98c6-8e5563b237fc-typha-certs\") pod \"calico-typha-586d968675-vspp7\" (UID: \"3fea7210-9fd9-4778-98c6-8e5563b237fc\") " pod="calico-system/calico-typha-586d968675-vspp7" Jan 23 01:09:54.390046 kubelet[2789]: I0123 01:09:54.389549 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmck\" (UniqueName: \"kubernetes.io/projected/3fea7210-9fd9-4778-98c6-8e5563b237fc-kube-api-access-mpmck\") pod \"calico-typha-586d968675-vspp7\" (UID: \"3fea7210-9fd9-4778-98c6-8e5563b237fc\") " pod="calico-system/calico-typha-586d968675-vspp7" Jan 23 01:09:54.390046 kubelet[2789]: I0123 01:09:54.389610 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fea7210-9fd9-4778-98c6-8e5563b237fc-tigera-ca-bundle\") pod \"calico-typha-586d968675-vspp7\" (UID: \"3fea7210-9fd9-4778-98c6-8e5563b237fc\") " pod="calico-system/calico-typha-586d968675-vspp7" Jan 23 01:09:54.490206 kubelet[2789]: I0123 01:09:54.489816 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8431fce5-95d8-49e7-9b5d-274f7ada39da-flexvol-driver-host\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " pod="calico-system/calico-node-88968" Jan 23 01:09:54.492007 kubelet[2789]: I0123 01:09:54.491959 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8431fce5-95d8-49e7-9b5d-274f7ada39da-node-certs\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " pod="calico-system/calico-node-88968" Jan 23 01:09:54.492332 kubelet[2789]: I0123 01:09:54.492252 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8431fce5-95d8-49e7-9b5d-274f7ada39da-cni-log-dir\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " pod="calico-system/calico-node-88968" Jan 23 01:09:54.492609 kubelet[2789]: I0123 01:09:54.492470 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8431fce5-95d8-49e7-9b5d-274f7ada39da-xtables-lock\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " pod="calico-system/calico-node-88968" Jan 23 01:09:54.492609 kubelet[2789]: I0123 01:09:54.492556 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85krd\" (UniqueName: \"kubernetes.io/projected/8431fce5-95d8-49e7-9b5d-274f7ada39da-kube-api-access-85krd\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " pod="calico-system/calico-node-88968" Jan 23 01:09:54.492833 kubelet[2789]: I0123 01:09:54.492813 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8431fce5-95d8-49e7-9b5d-274f7ada39da-lib-modules\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " 
pod="calico-system/calico-node-88968" Jan 23 01:09:54.493251 kubelet[2789]: I0123 01:09:54.493223 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8431fce5-95d8-49e7-9b5d-274f7ada39da-cni-net-dir\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " pod="calico-system/calico-node-88968" Jan 23 01:09:54.496067 kubelet[2789]: I0123 01:09:54.495576 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8431fce5-95d8-49e7-9b5d-274f7ada39da-var-lib-calico\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " pod="calico-system/calico-node-88968" Jan 23 01:09:54.496067 kubelet[2789]: I0123 01:09:54.495655 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8431fce5-95d8-49e7-9b5d-274f7ada39da-cni-bin-dir\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " pod="calico-system/calico-node-88968" Jan 23 01:09:54.496067 kubelet[2789]: I0123 01:09:54.495686 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8431fce5-95d8-49e7-9b5d-274f7ada39da-policysync\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " pod="calico-system/calico-node-88968" Jan 23 01:09:54.496067 kubelet[2789]: I0123 01:09:54.495728 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8431fce5-95d8-49e7-9b5d-274f7ada39da-tigera-ca-bundle\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " pod="calico-system/calico-node-88968" Jan 23 01:09:54.496067 kubelet[2789]: I0123 01:09:54.495772 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8431fce5-95d8-49e7-9b5d-274f7ada39da-var-run-calico\") pod \"calico-node-88968\" (UID: \"8431fce5-95d8-49e7-9b5d-274f7ada39da\") " pod="calico-system/calico-node-88968" Jan 23 01:09:54.503404 systemd[1]: Created slice kubepods-besteffort-pod8431fce5_95d8_49e7_9b5d_274f7ada39da.slice - libcontainer container kubepods-besteffort-pod8431fce5_95d8_49e7_9b5d_274f7ada39da.slice. Jan 23 01:09:54.602943 kubelet[2789]: E0123 01:09:54.602814 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:09:54.603348 kubelet[2789]: W0123 01:09:54.603221 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:09:54.604081 kubelet[2789]: E0123 01:09:54.604047 2789 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:09:54.611130 kubelet[2789]: E0123 01:09:54.611098 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:09:54.611320 kubelet[2789]: W0123 01:09:54.611299 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:09:54.613004 kubelet[2789]: E0123 01:09:54.612967 2789 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:09:54.641358 kubelet[2789]: E0123 01:09:54.641320 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:09:54.641569 kubelet[2789]: W0123 01:09:54.641541 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:09:54.641717 kubelet[2789]: E0123 01:09:54.641695 2789 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:09:54.654800 kubelet[2789]: E0123 01:09:54.654573 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:09:54.681983 containerd[1569]: time="2026-01-23T01:09:54.681886604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-586d968675-vspp7,Uid:3fea7210-9fd9-4778-98c6-8e5563b237fc,Namespace:calico-system,Attempt:0,}" Jan 23 01:09:54.691455 kubelet[2789]: E0123 01:09:54.691193 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:09:54.691455 kubelet[2789]: W0123 01:09:54.691231 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:09:54.691455 kubelet[2789]: E0123 01:09:54.691264 2789 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:09:54.692782 kubelet[2789]: E0123 01:09:54.692122 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:09:54.692782 kubelet[2789]: W0123 01:09:54.692154 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:09:54.692782 kubelet[2789]: E0123 01:09:54.692189 2789 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the same three-entry FlexVolume probe failure (driver-call.go:262 "Failed to unmarshal output", driver-call.go:149 missing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds executable, plugins.go:695 "Error dynamically probing plugins") repeats some eighteen more times between 01:09:54.696 and 01:09:54.746 while the calico-node volumes are reconciled; repetitions elided] Jan 23 01:09:54.748535 kubelet[2789]: E0123 01:09:54.748506 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:09:54.749456 kubelet[2789]: W0123 01:09:54.748678 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:09:54.749456 kubelet[2789]: E0123 01:09:54.748711 2789 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 01:09:54.749456 kubelet[2789]: I0123 01:09:54.748762 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e-kubelet-dir\") pod \"csi-node-driver-47s7m\" (UID: \"88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e\") " pod="calico-system/csi-node-driver-47s7m"
Jan 23 01:09:54.749930 kubelet[2789]: E0123 01:09:54.749866 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:09:54.749930 kubelet[2789]: W0123 01:09:54.749889 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:09:54.751418 kubelet[2789]: E0123 01:09:54.751391 2789 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:09:54.751629 kubelet[2789]: I0123 01:09:54.751569 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e-registration-dir\") pod \"csi-node-driver-47s7m\" (UID: \"88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e\") " pod="calico-system/csi-node-driver-47s7m"
Jan 23 01:09:54.752598 kubelet[2789]: I0123 01:09:54.752446 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e-socket-dir\") pod \"csi-node-driver-47s7m\" (UID: \"88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e\") " pod="calico-system/csi-node-driver-47s7m"
Jan 23 01:09:54.753704 kubelet[2789]: I0123 01:09:54.753557 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4fjc\" (UniqueName: \"kubernetes.io/projected/88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e-kube-api-access-g4fjc\") pod \"csi-node-driver-47s7m\" (UID: \"88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e\") " pod="calico-system/csi-node-driver-47s7m"
Jan 23 01:09:54.756826 kubelet[2789]: I0123 01:09:54.755242 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e-varrun\") pod \"csi-node-driver-47s7m\" (UID: \"88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e\") " pod="calico-system/csi-node-driver-47s7m"
[The driver-call.go:262 / driver-call.go:149 / plugins.go:695 FlexVolume probe-error triplet above was re-logged on every plugin-probe pass between 01:09:54.752 and 01:09:54.943; the duplicate occurrences are elided here.]
Jan 23 01:09:54.791350 containerd[1569]: time="2026-01-23T01:09:54.791228582Z" level=info msg="connecting to shim 431193880ee78bd02b652688977d80d54c1d5f66fe63c101cc6afc4a1f51a176" address="unix:///run/containerd/s/3b61f916ffe682afd85bcd95ac26dfce2348a074255d64b59665e90c0db5bd8c" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:09:54.842380 containerd[1569]: time="2026-01-23T01:09:54.842333263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-88968,Uid:8431fce5-95d8-49e7-9b5d-274f7ada39da,Namespace:calico-system,Attempt:0,}"
Jan 23 01:09:54.850622 systemd[1]: Started cri-containerd-431193880ee78bd02b652688977d80d54c1d5f66fe63c101cc6afc4a1f51a176.scope - libcontainer container 431193880ee78bd02b652688977d80d54c1d5f66fe63c101cc6afc4a1f51a176.
Jan 23 01:09:54.910119 containerd[1569]: time="2026-01-23T01:09:54.908106347Z" level=info msg="connecting to shim 4b021b0aaa26053ca97d1ed6dbd5e0f444cde526f962656f7789603218bb42e6" address="unix:///run/containerd/s/13448cc614ae36621e3570bb56783850e0deadb18c373699e13111f0ef38ea6f" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:09:54.987226 systemd[1]: Started cri-containerd-4b021b0aaa26053ca97d1ed6dbd5e0f444cde526f962656f7789603218bb42e6.scope - libcontainer container 4b021b0aaa26053ca97d1ed6dbd5e0f444cde526f962656f7789603218bb42e6.
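Every entry in the elided burst above has a single root cause: the kubelet's FlexVolume prober shells out to "<driver> init" under the plugin directory, the nodeagent~uds/uds binary is absent on this node, so the call produces empty stdout, and unmarshalling zero bytes of JSON yields exactly "unexpected end of JSON input". A minimal Go sketch of that failure chain, assuming an illustrative driverStatus type rather than the kubelet's real structs:

// Not the kubelet's actual code; this only reproduces the failure mode in
// the elided burst: run "<driver> init", get no stdout because the binary
// is absent, then fail to unmarshal zero bytes of JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus stands in for the FlexVolume reply shape; the field names
// here are illustrative.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probe(driver string) error {
	// The binary does not exist, so out stays empty and err is non-nil.
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	// json.Unmarshal over empty input returns "unexpected end of JSON input".
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		return fmt.Errorf("failed to unmarshal output: %w", uerr)
	}
	return nil
}

func main() {
	fmt.Println(probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}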
Jan 23 01:09:55.111807 containerd[1569]: time="2026-01-23T01:09:55.111714895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-88968,Uid:8431fce5-95d8-49e7-9b5d-274f7ada39da,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b021b0aaa26053ca97d1ed6dbd5e0f444cde526f962656f7789603218bb42e6\"" Jan 23 01:09:55.116593 containerd[1569]: time="2026-01-23T01:09:55.115555145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 01:09:55.133504 containerd[1569]: time="2026-01-23T01:09:55.133457943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-586d968675-vspp7,Uid:3fea7210-9fd9-4778-98c6-8e5563b237fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"431193880ee78bd02b652688977d80d54c1d5f66fe63c101cc6afc4a1f51a176\"" Jan 23 01:09:56.050281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518344019.mount: Deactivated successfully. Jan 23 01:09:56.137869 kubelet[2789]: E0123 01:09:56.137719 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:09:56.177159 containerd[1569]: time="2026-01-23T01:09:56.177068201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:56.178440 containerd[1569]: time="2026-01-23T01:09:56.177962918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 23 01:09:56.181401 containerd[1569]: time="2026-01-23T01:09:56.181354960Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:56.182860 containerd[1569]: time="2026-01-23T01:09:56.182820854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:56.183806 containerd[1569]: time="2026-01-23T01:09:56.183759331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.06815483s" Jan 23 01:09:56.183897 containerd[1569]: time="2026-01-23T01:09:56.183811287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 01:09:56.185710 containerd[1569]: time="2026-01-23T01:09:56.185675011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 01:09:56.187447 containerd[1569]: time="2026-01-23T01:09:56.187412224Z" level=info msg="CreateContainer within sandbox \"4b021b0aaa26053ca97d1ed6dbd5e0f444cde526f962656f7789603218bb42e6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 01:09:56.206094 containerd[1569]: time="2026-01-23T01:09:56.206006510Z" level=info 
msg="Container 17083d0edd898a5bd04f90c5d0306ce5dbd46ca54a5aead2be021730f4337f01: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:09:56.220236 containerd[1569]: time="2026-01-23T01:09:56.220163407Z" level=info msg="CreateContainer within sandbox \"4b021b0aaa26053ca97d1ed6dbd5e0f444cde526f962656f7789603218bb42e6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"17083d0edd898a5bd04f90c5d0306ce5dbd46ca54a5aead2be021730f4337f01\"" Jan 23 01:09:56.221382 containerd[1569]: time="2026-01-23T01:09:56.221323648Z" level=info msg="StartContainer for \"17083d0edd898a5bd04f90c5d0306ce5dbd46ca54a5aead2be021730f4337f01\"" Jan 23 01:09:56.224640 containerd[1569]: time="2026-01-23T01:09:56.224593899Z" level=info msg="connecting to shim 17083d0edd898a5bd04f90c5d0306ce5dbd46ca54a5aead2be021730f4337f01" address="unix:///run/containerd/s/13448cc614ae36621e3570bb56783850e0deadb18c373699e13111f0ef38ea6f" protocol=ttrpc version=3 Jan 23 01:09:56.262213 systemd[1]: Started cri-containerd-17083d0edd898a5bd04f90c5d0306ce5dbd46ca54a5aead2be021730f4337f01.scope - libcontainer container 17083d0edd898a5bd04f90c5d0306ce5dbd46ca54a5aead2be021730f4337f01. Jan 23 01:09:56.369477 containerd[1569]: time="2026-01-23T01:09:56.368801465Z" level=info msg="StartContainer for \"17083d0edd898a5bd04f90c5d0306ce5dbd46ca54a5aead2be021730f4337f01\" returns successfully" Jan 23 01:09:56.386362 systemd[1]: cri-containerd-17083d0edd898a5bd04f90c5d0306ce5dbd46ca54a5aead2be021730f4337f01.scope: Deactivated successfully. Jan 23 01:09:56.393162 containerd[1569]: time="2026-01-23T01:09:56.393112450Z" level=info msg="received container exit event container_id:\"17083d0edd898a5bd04f90c5d0306ce5dbd46ca54a5aead2be021730f4337f01\" id:\"17083d0edd898a5bd04f90c5d0306ce5dbd46ca54a5aead2be021730f4337f01\" pid:3405 exited_at:{seconds:1769130596 nanos:392388084}" Jan 23 01:09:58.137383 kubelet[2789]: E0123 01:09:58.136510 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:09:58.660391 containerd[1569]: time="2026-01-23T01:09:58.660320876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:58.661968 containerd[1569]: time="2026-01-23T01:09:58.661727584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Jan 23 01:09:58.663456 containerd[1569]: time="2026-01-23T01:09:58.663415399Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:58.666881 containerd[1569]: time="2026-01-23T01:09:58.666672605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:09:58.668027 containerd[1569]: time="2026-01-23T01:09:58.667987500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.482237548s" Jan 23 01:09:58.668206 containerd[1569]: time="2026-01-23T01:09:58.668178069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 01:09:58.670119 containerd[1569]: time="2026-01-23T01:09:58.669483303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 01:09:58.699020 containerd[1569]: time="2026-01-23T01:09:58.698968907Z" level=info msg="CreateContainer within sandbox \"431193880ee78bd02b652688977d80d54c1d5f66fe63c101cc6afc4a1f51a176\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 01:09:58.714342 containerd[1569]: time="2026-01-23T01:09:58.714081298Z" level=info msg="Container 234118faa66a536471cafd85f415647d2c26cb228a61fde96f7fd9c9dba4dc20: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:09:58.737435 containerd[1569]: time="2026-01-23T01:09:58.737359017Z" level=info msg="CreateContainer within sandbox \"431193880ee78bd02b652688977d80d54c1d5f66fe63c101cc6afc4a1f51a176\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"234118faa66a536471cafd85f415647d2c26cb228a61fde96f7fd9c9dba4dc20\"" Jan 23 01:09:58.738442 containerd[1569]: time="2026-01-23T01:09:58.738371186Z" level=info msg="StartContainer for \"234118faa66a536471cafd85f415647d2c26cb228a61fde96f7fd9c9dba4dc20\"" Jan 23 01:09:58.740652 containerd[1569]: time="2026-01-23T01:09:58.740596392Z" level=info msg="connecting to shim 234118faa66a536471cafd85f415647d2c26cb228a61fde96f7fd9c9dba4dc20" address="unix:///run/containerd/s/3b61f916ffe682afd85bcd95ac26dfce2348a074255d64b59665e90c0db5bd8c" protocol=ttrpc version=3 Jan 23 01:09:58.776244 systemd[1]: Started cri-containerd-234118faa66a536471cafd85f415647d2c26cb228a61fde96f7fd9c9dba4dc20.scope - libcontainer container 234118faa66a536471cafd85f415647d2c26cb228a61fde96f7fd9c9dba4dc20. 
Jan 23 01:09:58.856991 containerd[1569]: time="2026-01-23T01:09:58.856927859Z" level=info msg="StartContainer for \"234118faa66a536471cafd85f415647d2c26cb228a61fde96f7fd9c9dba4dc20\" returns successfully" Jan 23 01:10:00.139467 kubelet[2789]: E0123 01:10:00.137141 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:10:00.311594 kubelet[2789]: I0123 01:10:00.310808 2789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 01:10:01.837065 containerd[1569]: time="2026-01-23T01:10:01.836995814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:01.838749 containerd[1569]: time="2026-01-23T01:10:01.838684361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 01:10:01.840212 containerd[1569]: time="2026-01-23T01:10:01.840143805Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:01.845131 containerd[1569]: time="2026-01-23T01:10:01.845089824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:01.846128 containerd[1569]: time="2026-01-23T01:10:01.845951792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.176429512s" Jan 23 01:10:01.846128 containerd[1569]: time="2026-01-23T01:10:01.845996612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 01:10:01.850194 containerd[1569]: time="2026-01-23T01:10:01.850135548Z" level=info msg="CreateContainer within sandbox \"4b021b0aaa26053ca97d1ed6dbd5e0f444cde526f962656f7789603218bb42e6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 01:10:01.864942 containerd[1569]: time="2026-01-23T01:10:01.864111328Z" level=info msg="Container 51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:01.878752 containerd[1569]: time="2026-01-23T01:10:01.878678715Z" level=info msg="CreateContainer within sandbox \"4b021b0aaa26053ca97d1ed6dbd5e0f444cde526f962656f7789603218bb42e6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d\"" Jan 23 01:10:01.880307 containerd[1569]: time="2026-01-23T01:10:01.879439899Z" level=info msg="StartContainer for \"51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d\"" Jan 23 01:10:01.882307 containerd[1569]: time="2026-01-23T01:10:01.882255054Z" level=info msg="connecting to shim 51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d" 
address="unix:///run/containerd/s/13448cc614ae36621e3570bb56783850e0deadb18c373699e13111f0ef38ea6f" protocol=ttrpc version=3 Jan 23 01:10:01.911171 systemd[1]: Started cri-containerd-51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d.scope - libcontainer container 51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d. Jan 23 01:10:02.028289 containerd[1569]: time="2026-01-23T01:10:02.028109517Z" level=info msg="StartContainer for \"51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d\" returns successfully" Jan 23 01:10:02.137228 kubelet[2789]: E0123 01:10:02.137165 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:10:02.343743 kubelet[2789]: I0123 01:10:02.342880 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-586d968675-vspp7" podStartSLOduration=4.810850415 podStartE2EDuration="8.342854765s" podCreationTimestamp="2026-01-23 01:09:54 +0000 UTC" firstStartedPulling="2026-01-23 01:09:55.13727428 +0000 UTC m=+24.209783123" lastFinishedPulling="2026-01-23 01:09:58.669278608 +0000 UTC m=+27.741787473" observedRunningTime="2026-01-23 01:09:59.359317314 +0000 UTC m=+28.431826216" watchObservedRunningTime="2026-01-23 01:10:02.342854765 +0000 UTC m=+31.415363634" Jan 23 01:10:03.104721 containerd[1569]: time="2026-01-23T01:10:03.104660766Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:10:03.110204 systemd[1]: cri-containerd-51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d.scope: Deactivated successfully. Jan 23 01:10:03.110824 systemd[1]: cri-containerd-51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d.scope: Consumed 710ms CPU time, 190.9M memory peak, 171.3M written to disk. Jan 23 01:10:03.115404 containerd[1569]: time="2026-01-23T01:10:03.114898007Z" level=info msg="received container exit event container_id:\"51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d\" id:\"51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d\" pid:3507 exited_at:{seconds:1769130603 nanos:113949604}" Jan 23 01:10:03.153557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51ed880da57377ce08626d98b0c9568a83bfea525e29e8187295a9317e26c81d-rootfs.mount: Deactivated successfully. 
Jan 23 01:10:03.214630 kubelet[2789]: I0123 01:10:03.214479 2789 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:10:03.269177 kubelet[2789]: I0123 01:10:03.269110 2789 status_manager.go:890] "Failed to get status for pod" podUID="5ce5d673-fa45-45a7-8888-a98708c476b0" pod="kube-system/coredns-668d6bf9bc-l6md5" err="pods \"coredns-668d6bf9bc-l6md5\" is forbidden: User \"system:node:ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' and this object" Jan 23 01:10:03.269471 kubelet[2789]: W0123 01:10:03.269388 2789 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' and this object Jan 23 01:10:03.269471 kubelet[2789]: E0123 01:10:03.269438 2789 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' and this object" logger="UnhandledError" Jan 23 01:10:03.287233 systemd[1]: Created slice kubepods-burstable-pod5ce5d673_fa45_45a7_8888_a98708c476b0.slice - libcontainer container kubepods-burstable-pod5ce5d673_fa45_45a7_8888_a98708c476b0.slice. Jan 23 01:10:03.302514 systemd[1]: Created slice kubepods-besteffort-pod064b559c_bfe9_4534_b533_689a0c2791a2.slice - libcontainer container kubepods-besteffort-pod064b559c_bfe9_4534_b533_689a0c2791a2.slice. Jan 23 01:10:03.318513 systemd[1]: Created slice kubepods-besteffort-podae9746a2_a617_45a0_ab4a_8c3ff369f251.slice - libcontainer container kubepods-besteffort-podae9746a2_a617_45a0_ab4a_8c3ff369f251.slice. Jan 23 01:10:03.332030 systemd[1]: Created slice kubepods-besteffort-pod107c88ad_a23e_4977_926b_0153678bb502.slice - libcontainer container kubepods-besteffort-pod107c88ad_a23e_4977_926b_0153678bb502.slice. 
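The "no relationship found between node ... and this object" failures above are the node authorizer working as designed, not a misconfiguration: a kubelet may read a ConfigMap or pod status only once a pod scheduled to its node references it, and the coredns pods were assigned here only moments earlier, so the authorizer's graph has not caught up yet. An illustrative toy version of that graph check; the edges map and authorize helper are hypothetical, not the real NodeAuthorizer:

package main

import "fmt"

// edges is a toy stand-in for the authorizer's graph: object -> nodes that
// reference it through a scheduled pod. Its contents are hypothetical.
var edges = map[string]map[string]bool{
	"configmap/kube-system/coredns": {}, // no pod on this node references it yet
}

func authorize(node, object string) error {
	if edges[object][node] {
		return nil
	}
	return fmt.Errorf("no relationship found between node %q and this object", node)
}

func main() {
	fmt.Println(authorize("ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", "configmap/kube-system/coredns"))
}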
Jan 23 01:10:03.332263 kubelet[2789]: I0123 01:10:03.332229 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/107c88ad-a23e-4977-926b-0153678bb502-calico-apiserver-certs\") pod \"calico-apiserver-5d785d7599-c6hvd\" (UID: \"107c88ad-a23e-4977-926b-0153678bb502\") " pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" Jan 23 01:10:03.332367 kubelet[2789]: I0123 01:10:03.332283 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54a633b2-3fb5-43ea-815f-68be219af473-config-volume\") pod \"coredns-668d6bf9bc-8qp5z\" (UID: \"54a633b2-3fb5-43ea-815f-68be219af473\") " pod="kube-system/coredns-668d6bf9bc-8qp5z" Jan 23 01:10:03.332367 kubelet[2789]: I0123 01:10:03.332331 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5jz9\" (UniqueName: \"kubernetes.io/projected/f817b277-502c-42fa-96de-77b7a2b164dc-kube-api-access-r5jz9\") pod \"goldmane-666569f655-rx4sc\" (UID: \"f817b277-502c-42fa-96de-77b7a2b164dc\") " pod="calico-system/goldmane-666569f655-rx4sc" Jan 23 01:10:03.332476 kubelet[2789]: I0123 01:10:03.332359 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ae9746a2-a617-45a0-ab4a-8c3ff369f251-calico-apiserver-certs\") pod \"calico-apiserver-5d785d7599-4z2jg\" (UID: \"ae9746a2-a617-45a0-ab4a-8c3ff369f251\") " pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" Jan 23 01:10:03.332476 kubelet[2789]: I0123 01:10:03.332416 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdrwl\" (UniqueName: \"kubernetes.io/projected/ae9746a2-a617-45a0-ab4a-8c3ff369f251-kube-api-access-bdrwl\") pod \"calico-apiserver-5d785d7599-4z2jg\" (UID: \"ae9746a2-a617-45a0-ab4a-8c3ff369f251\") " pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" Jan 23 01:10:03.332476 kubelet[2789]: I0123 01:10:03.332444 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvrt7\" (UniqueName: \"kubernetes.io/projected/54a633b2-3fb5-43ea-815f-68be219af473-kube-api-access-wvrt7\") pod \"coredns-668d6bf9bc-8qp5z\" (UID: \"54a633b2-3fb5-43ea-815f-68be219af473\") " pod="kube-system/coredns-668d6bf9bc-8qp5z" Jan 23 01:10:03.332642 kubelet[2789]: I0123 01:10:03.332478 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/98f87fa8-1efc-406b-89d8-4c32839b99d2-whisker-backend-key-pair\") pod \"whisker-f7457f998-hbh8m\" (UID: \"98f87fa8-1efc-406b-89d8-4c32839b99d2\") " pod="calico-system/whisker-f7457f998-hbh8m" Jan 23 01:10:03.332642 kubelet[2789]: I0123 01:10:03.332535 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwh9\" (UniqueName: \"kubernetes.io/projected/107c88ad-a23e-4977-926b-0153678bb502-kube-api-access-2rwh9\") pod \"calico-apiserver-5d785d7599-c6hvd\" (UID: \"107c88ad-a23e-4977-926b-0153678bb502\") " pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" Jan 23 01:10:03.332642 kubelet[2789]: I0123 01:10:03.332565 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f817b277-502c-42fa-96de-77b7a2b164dc-goldmane-key-pair\") pod \"goldmane-666569f655-rx4sc\" (UID: \"f817b277-502c-42fa-96de-77b7a2b164dc\") " pod="calico-system/goldmane-666569f655-rx4sc" Jan 23 01:10:03.332642 kubelet[2789]: I0123 01:10:03.332598 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxwzj\" (UniqueName: \"kubernetes.io/projected/98f87fa8-1efc-406b-89d8-4c32839b99d2-kube-api-access-wxwzj\") pod \"whisker-f7457f998-hbh8m\" (UID: \"98f87fa8-1efc-406b-89d8-4c32839b99d2\") " pod="calico-system/whisker-f7457f998-hbh8m" Jan 23 01:10:03.332642 kubelet[2789]: I0123 01:10:03.332628 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5xg9\" (UniqueName: \"kubernetes.io/projected/5ce5d673-fa45-45a7-8888-a98708c476b0-kube-api-access-h5xg9\") pod \"coredns-668d6bf9bc-l6md5\" (UID: \"5ce5d673-fa45-45a7-8888-a98708c476b0\") " pod="kube-system/coredns-668d6bf9bc-l6md5" Jan 23 01:10:03.332886 kubelet[2789]: I0123 01:10:03.332665 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/064b559c-bfe9-4534-b533-689a0c2791a2-tigera-ca-bundle\") pod \"calico-kube-controllers-7456665ddf-rb6bt\" (UID: \"064b559c-bfe9-4534-b533-689a0c2791a2\") " pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" Jan 23 01:10:03.332886 kubelet[2789]: I0123 01:10:03.332699 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f87fa8-1efc-406b-89d8-4c32839b99d2-whisker-ca-bundle\") pod \"whisker-f7457f998-hbh8m\" (UID: \"98f87fa8-1efc-406b-89d8-4c32839b99d2\") " pod="calico-system/whisker-f7457f998-hbh8m" Jan 23 01:10:03.332886 kubelet[2789]: I0123 01:10:03.332740 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcjmq\" (UniqueName: \"kubernetes.io/projected/064b559c-bfe9-4534-b533-689a0c2791a2-kube-api-access-mcjmq\") pod \"calico-kube-controllers-7456665ddf-rb6bt\" (UID: \"064b559c-bfe9-4534-b533-689a0c2791a2\") " pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" Jan 23 01:10:03.332886 kubelet[2789]: I0123 01:10:03.332785 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f817b277-502c-42fa-96de-77b7a2b164dc-goldmane-ca-bundle\") pod \"goldmane-666569f655-rx4sc\" (UID: \"f817b277-502c-42fa-96de-77b7a2b164dc\") " pod="calico-system/goldmane-666569f655-rx4sc" Jan 23 01:10:03.332886 kubelet[2789]: I0123 01:10:03.332822 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f817b277-502c-42fa-96de-77b7a2b164dc-config\") pod \"goldmane-666569f655-rx4sc\" (UID: \"f817b277-502c-42fa-96de-77b7a2b164dc\") " pod="calico-system/goldmane-666569f655-rx4sc" Jan 23 01:10:03.334298 kubelet[2789]: I0123 01:10:03.332851 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ce5d673-fa45-45a7-8888-a98708c476b0-config-volume\") pod \"coredns-668d6bf9bc-l6md5\" (UID: \"5ce5d673-fa45-45a7-8888-a98708c476b0\") " 
pod="kube-system/coredns-668d6bf9bc-l6md5" Jan 23 01:10:03.356554 systemd[1]: Created slice kubepods-burstable-pod54a633b2_3fb5_43ea_815f_68be219af473.slice - libcontainer container kubepods-burstable-pod54a633b2_3fb5_43ea_815f_68be219af473.slice. Jan 23 01:10:03.367821 systemd[1]: Created slice kubepods-besteffort-pod98f87fa8_1efc_406b_89d8_4c32839b99d2.slice - libcontainer container kubepods-besteffort-pod98f87fa8_1efc_406b_89d8_4c32839b99d2.slice. Jan 23 01:10:03.387209 systemd[1]: Created slice kubepods-besteffort-podf817b277_502c_42fa_96de_77b7a2b164dc.slice - libcontainer container kubepods-besteffort-podf817b277_502c_42fa_96de_77b7a2b164dc.slice. Jan 23 01:10:03.613022 containerd[1569]: time="2026-01-23T01:10:03.612064829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7456665ddf-rb6bt,Uid:064b559c-bfe9-4534-b533-689a0c2791a2,Namespace:calico-system,Attempt:0,}" Jan 23 01:10:03.628704 containerd[1569]: time="2026-01-23T01:10:03.628629827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d785d7599-4z2jg,Uid:ae9746a2-a617-45a0-ab4a-8c3ff369f251,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:10:03.639512 containerd[1569]: time="2026-01-23T01:10:03.639364666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d785d7599-c6hvd,Uid:107c88ad-a23e-4977-926b-0153678bb502,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:10:03.680512 containerd[1569]: time="2026-01-23T01:10:03.680455478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7457f998-hbh8m,Uid:98f87fa8-1efc-406b-89d8-4c32839b99d2,Namespace:calico-system,Attempt:0,}" Jan 23 01:10:03.700001 containerd[1569]: time="2026-01-23T01:10:03.699929609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rx4sc,Uid:f817b277-502c-42fa-96de-77b7a2b164dc,Namespace:calico-system,Attempt:0,}" Jan 23 01:10:04.150424 systemd[1]: Created slice kubepods-besteffort-pod88a4fdb4_4d5f_4b12_aeb4_00fd6737d18e.slice - libcontainer container kubepods-besteffort-pod88a4fdb4_4d5f_4b12_aeb4_00fd6737d18e.slice. Jan 23 01:10:04.181554 containerd[1569]: time="2026-01-23T01:10:04.181061624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-47s7m,Uid:88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e,Namespace:calico-system,Attempt:0,}" Jan 23 01:10:04.271473 containerd[1569]: time="2026-01-23T01:10:04.271390168Z" level=error msg="Failed to destroy network for sandbox \"0f4554e73c36a32de38fbff58d8fbf28d31799ceeb088e5189bab041570b79d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.277929 systemd[1]: run-netns-cni\x2d79bec35d\x2d9ba6\x2da024\x2d5476\x2d52cc5b6de75b.mount: Deactivated successfully. 
Jan 23 01:10:04.283107 containerd[1569]: time="2026-01-23T01:10:04.282374487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rx4sc,Uid:f817b277-502c-42fa-96de-77b7a2b164dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f4554e73c36a32de38fbff58d8fbf28d31799ceeb088e5189bab041570b79d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.283315 kubelet[2789]: E0123 01:10:04.282663 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f4554e73c36a32de38fbff58d8fbf28d31799ceeb088e5189bab041570b79d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.283315 kubelet[2789]: E0123 01:10:04.282749 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f4554e73c36a32de38fbff58d8fbf28d31799ceeb088e5189bab041570b79d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rx4sc" Jan 23 01:10:04.283315 kubelet[2789]: E0123 01:10:04.282787 2789 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f4554e73c36a32de38fbff58d8fbf28d31799ceeb088e5189bab041570b79d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rx4sc" Jan 23 01:10:04.284438 kubelet[2789]: E0123 01:10:04.282857 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-rx4sc_calico-system(f817b277-502c-42fa-96de-77b7a2b164dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-rx4sc_calico-system(f817b277-502c-42fa-96de-77b7a2b164dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f4554e73c36a32de38fbff58d8fbf28d31799ceeb088e5189bab041570b79d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-rx4sc" podUID="f817b277-502c-42fa-96de-77b7a2b164dc" Jan 23 01:10:04.355739 containerd[1569]: time="2026-01-23T01:10:04.355685228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 01:10:04.421113 containerd[1569]: time="2026-01-23T01:10:04.420150658Z" level=error msg="Failed to destroy network for sandbox \"aa42de8bf320ec6c789857295fcb2896edd8474cf0595246b7094d6bb656f7b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.427003 systemd[1]: run-netns-cni\x2d58a974b6\x2d8e3e\x2dcc55\x2d7b04\x2d952d5235e150.mount: Deactivated successfully. 
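One readability note on the entries above: the same CNI error surfaces with progressively deeper backslash escaping because each kubelet layer (log.go, kuberuntime_sandbox.go, kuberuntime_manager.go, pod_workers.go) re-quotes the message it received, so a \" at one level becomes \\\" at the next. A two-line illustration of that mechanism:

package main

import "fmt"

func main() {
	// One layer of %q turns each " into \"; wrapping the result again, as
	// the pod_workers entry above does, turns those into \\\".
	inner := `plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`
	wrapped := fmt.Sprintf("Failed to create sandbox for pod %q", "goldmane-666569f655-rx4sc_calico-system(f817b277-502c-42fa-96de-77b7a2b164dc)")
	fmt.Printf("err=%q\n", wrapped+": "+inner) // inner's quotes print as \" here
}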
Jan 23 01:10:04.429110 containerd[1569]: time="2026-01-23T01:10:04.428718552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7456665ddf-rb6bt,Uid:064b559c-bfe9-4534-b533-689a0c2791a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa42de8bf320ec6c789857295fcb2896edd8474cf0595246b7094d6bb656f7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.429297 kubelet[2789]: E0123 01:10:04.429131 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa42de8bf320ec6c789857295fcb2896edd8474cf0595246b7094d6bb656f7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.429297 kubelet[2789]: E0123 01:10:04.429207 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa42de8bf320ec6c789857295fcb2896edd8474cf0595246b7094d6bb656f7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" Jan 23 01:10:04.429297 kubelet[2789]: E0123 01:10:04.429244 2789 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa42de8bf320ec6c789857295fcb2896edd8474cf0595246b7094d6bb656f7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" Jan 23 01:10:04.429472 kubelet[2789]: E0123 01:10:04.429302 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7456665ddf-rb6bt_calico-system(064b559c-bfe9-4534-b533-689a0c2791a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7456665ddf-rb6bt_calico-system(064b559c-bfe9-4534-b533-689a0c2791a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa42de8bf320ec6c789857295fcb2896edd8474cf0595246b7094d6bb656f7b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" podUID="064b559c-bfe9-4534-b533-689a0c2791a2" Jan 23 01:10:04.444630 containerd[1569]: time="2026-01-23T01:10:04.444511273Z" level=error msg="Failed to destroy network for sandbox \"80795d457a6498c0dc58f41a0a5447f339f69f599199b456f377443079da030f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.450211 kubelet[2789]: E0123 01:10:04.450069 2789 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 23 01:10:04.450516 kubelet[2789]: E0123 01:10:04.450290 
2789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/54a633b2-3fb5-43ea-815f-68be219af473-config-volume podName:54a633b2-3fb5-43ea-815f-68be219af473 nodeName:}" failed. No retries permitted until 2026-01-23 01:10:04.950231659 +0000 UTC m=+34.022740525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/54a633b2-3fb5-43ea-815f-68be219af473-config-volume") pod "coredns-668d6bf9bc-8qp5z" (UID: "54a633b2-3fb5-43ea-815f-68be219af473") : failed to sync configmap cache: timed out waiting for the condition Jan 23 01:10:04.451916 systemd[1]: run-netns-cni\x2d748bec97\x2db0cb\x2d4e72\x2d539f\x2d766ff591c826.mount: Deactivated successfully. Jan 23 01:10:04.456110 containerd[1569]: time="2026-01-23T01:10:04.455977579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-47s7m,Uid:88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"80795d457a6498c0dc58f41a0a5447f339f69f599199b456f377443079da030f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.459181 kubelet[2789]: E0123 01:10:04.459123 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80795d457a6498c0dc58f41a0a5447f339f69f599199b456f377443079da030f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.459545 kubelet[2789]: E0123 01:10:04.459198 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80795d457a6498c0dc58f41a0a5447f339f69f599199b456f377443079da030f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-47s7m" Jan 23 01:10:04.459545 kubelet[2789]: E0123 01:10:04.459232 2789 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80795d457a6498c0dc58f41a0a5447f339f69f599199b456f377443079da030f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-47s7m" Jan 23 01:10:04.459545 kubelet[2789]: E0123 01:10:04.459299 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-47s7m_calico-system(88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-47s7m_calico-system(88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80795d457a6498c0dc58f41a0a5447f339f69f599199b456f377443079da030f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 
01:10:04.460528 kubelet[2789]: E0123 01:10:04.460498 2789 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 23 01:10:04.460654 kubelet[2789]: E0123 01:10:04.460595 2789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ce5d673-fa45-45a7-8888-a98708c476b0-config-volume podName:5ce5d673-fa45-45a7-8888-a98708c476b0 nodeName:}" failed. No retries permitted until 2026-01-23 01:10:04.960569973 +0000 UTC m=+34.033078854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5ce5d673-fa45-45a7-8888-a98708c476b0-config-volume") pod "coredns-668d6bf9bc-l6md5" (UID: "5ce5d673-fa45-45a7-8888-a98708c476b0") : failed to sync configmap cache: timed out waiting for the condition Jan 23 01:10:04.465262 containerd[1569]: time="2026-01-23T01:10:04.465211844Z" level=error msg="Failed to destroy network for sandbox \"f014f8c2459faff0f4f616160310e1e9cd4e29eb9cff49ec63fa28d17da2fe34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.469294 containerd[1569]: time="2026-01-23T01:10:04.469236294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d785d7599-4z2jg,Uid:ae9746a2-a617-45a0-ab4a-8c3ff369f251,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f014f8c2459faff0f4f616160310e1e9cd4e29eb9cff49ec63fa28d17da2fe34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.476942 kubelet[2789]: E0123 01:10:04.475054 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f014f8c2459faff0f4f616160310e1e9cd4e29eb9cff49ec63fa28d17da2fe34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.476942 kubelet[2789]: E0123 01:10:04.475137 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f014f8c2459faff0f4f616160310e1e9cd4e29eb9cff49ec63fa28d17da2fe34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" Jan 23 01:10:04.476942 kubelet[2789]: E0123 01:10:04.475171 2789 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f014f8c2459faff0f4f616160310e1e9cd4e29eb9cff49ec63fa28d17da2fe34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" Jan 23 01:10:04.477200 kubelet[2789]: E0123 01:10:04.475241 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d785d7599-4z2jg_calico-apiserver(ae9746a2-a617-45a0-ab4a-8c3ff369f251)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d785d7599-4z2jg_calico-apiserver(ae9746a2-a617-45a0-ab4a-8c3ff369f251)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f014f8c2459faff0f4f616160310e1e9cd4e29eb9cff49ec63fa28d17da2fe34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" podUID="ae9746a2-a617-45a0-ab4a-8c3ff369f251" Jan 23 01:10:04.486074 containerd[1569]: time="2026-01-23T01:10:04.486012236Z" level=error msg="Failed to destroy network for sandbox \"0413f30d967f8879e9be641c591b18470386ee6695dcdc73381a418eabfeca91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.488343 containerd[1569]: time="2026-01-23T01:10:04.488286258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d785d7599-c6hvd,Uid:107c88ad-a23e-4977-926b-0153678bb502,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0413f30d967f8879e9be641c591b18470386ee6695dcdc73381a418eabfeca91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.488719 kubelet[2789]: E0123 01:10:04.488669 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0413f30d967f8879e9be641c591b18470386ee6695dcdc73381a418eabfeca91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.488868 kubelet[2789]: E0123 01:10:04.488747 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0413f30d967f8879e9be641c591b18470386ee6695dcdc73381a418eabfeca91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" Jan 23 01:10:04.488868 kubelet[2789]: E0123 01:10:04.488776 2789 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0413f30d967f8879e9be641c591b18470386ee6695dcdc73381a418eabfeca91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" Jan 23 01:10:04.489744 kubelet[2789]: E0123 01:10:04.488853 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d785d7599-c6hvd_calico-apiserver(107c88ad-a23e-4977-926b-0153678bb502)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d785d7599-c6hvd_calico-apiserver(107c88ad-a23e-4977-926b-0153678bb502)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0413f30d967f8879e9be641c591b18470386ee6695dcdc73381a418eabfeca91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" podUID="107c88ad-a23e-4977-926b-0153678bb502" Jan 23 01:10:04.494528 containerd[1569]: time="2026-01-23T01:10:04.494450461Z" level=error msg="Failed to destroy network for sandbox \"55f2e58533f512f1681bef153f54773dc9e76989072c20d9f8b836aca1c6a2fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.496387 containerd[1569]: time="2026-01-23T01:10:04.496331594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7457f998-hbh8m,Uid:98f87fa8-1efc-406b-89d8-4c32839b99d2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55f2e58533f512f1681bef153f54773dc9e76989072c20d9f8b836aca1c6a2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.496649 kubelet[2789]: E0123 01:10:04.496582 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55f2e58533f512f1681bef153f54773dc9e76989072c20d9f8b836aca1c6a2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:04.496757 kubelet[2789]: E0123 01:10:04.496649 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55f2e58533f512f1681bef153f54773dc9e76989072c20d9f8b836aca1c6a2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f7457f998-hbh8m" Jan 23 01:10:04.496757 kubelet[2789]: E0123 01:10:04.496679 2789 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55f2e58533f512f1681bef153f54773dc9e76989072c20d9f8b836aca1c6a2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f7457f998-hbh8m" Jan 23 01:10:04.496757 kubelet[2789]: E0123 01:10:04.496733 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f7457f998-hbh8m_calico-system(98f87fa8-1efc-406b-89d8-4c32839b99d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f7457f998-hbh8m_calico-system(98f87fa8-1efc-406b-89d8-4c32839b99d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55f2e58533f512f1681bef153f54773dc9e76989072c20d9f8b836aca1c6a2fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f7457f998-hbh8m" podUID="98f87fa8-1efc-406b-89d8-4c32839b99d2" 
Jan 23 01:10:05.100814 containerd[1569]: time="2026-01-23T01:10:05.100762990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l6md5,Uid:5ce5d673-fa45-45a7-8888-a98708c476b0,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:05.159369 systemd[1]: run-netns-cni\x2dacbb9eb8\x2d5350\x2d395e\x2de573\x2d4d4a3f2d1413.mount: Deactivated successfully. Jan 23 01:10:05.160081 systemd[1]: run-netns-cni\x2db7ba9315\x2d4c14\x2dd359\x2d6fd0\x2d51271dc391f1.mount: Deactivated successfully. Jan 23 01:10:05.160592 systemd[1]: run-netns-cni\x2d1c79804a\x2dd1b3\x2d7757\x2dedf7\x2d3e7c9dec55e7.mount: Deactivated successfully. Jan 23 01:10:05.172248 containerd[1569]: time="2026-01-23T01:10:05.170143073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8qp5z,Uid:54a633b2-3fb5-43ea-815f-68be219af473,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:05.236703 containerd[1569]: time="2026-01-23T01:10:05.236566799Z" level=error msg="Failed to destroy network for sandbox \"31f0073f2098e7fae3cb2d80afa35e985f6adf9ac5d68689b80d7512a9ec7706\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:05.245223 systemd[1]: run-netns-cni\x2d64662f42\x2d9644\x2dd458\x2d50c9\x2d6fda9453c961.mount: Deactivated successfully. Jan 23 01:10:05.248867 containerd[1569]: time="2026-01-23T01:10:05.246532037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l6md5,Uid:5ce5d673-fa45-45a7-8888-a98708c476b0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31f0073f2098e7fae3cb2d80afa35e985f6adf9ac5d68689b80d7512a9ec7706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:05.249087 kubelet[2789]: E0123 01:10:05.247338 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31f0073f2098e7fae3cb2d80afa35e985f6adf9ac5d68689b80d7512a9ec7706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:05.249087 kubelet[2789]: E0123 01:10:05.247661 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31f0073f2098e7fae3cb2d80afa35e985f6adf9ac5d68689b80d7512a9ec7706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l6md5" Jan 23 01:10:05.249087 kubelet[2789]: E0123 01:10:05.247824 2789 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31f0073f2098e7fae3cb2d80afa35e985f6adf9ac5d68689b80d7512a9ec7706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l6md5" Jan 23 01:10:05.249294 kubelet[2789]: E0123 01:10:05.248078 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-l6md5_kube-system(5ce5d673-fa45-45a7-8888-a98708c476b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-l6md5_kube-system(5ce5d673-fa45-45a7-8888-a98708c476b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31f0073f2098e7fae3cb2d80afa35e985f6adf9ac5d68689b80d7512a9ec7706\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-l6md5" podUID="5ce5d673-fa45-45a7-8888-a98708c476b0" Jan 23 01:10:05.361815 containerd[1569]: time="2026-01-23T01:10:05.360565104Z" level=error msg="Failed to destroy network for sandbox \"9e05d25e3352624a281079ce201cd72d117abe6833ca8d16e049391e0bdc3423\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:05.364947 containerd[1569]: time="2026-01-23T01:10:05.364826304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8qp5z,Uid:54a633b2-3fb5-43ea-815f-68be219af473,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e05d25e3352624a281079ce201cd72d117abe6833ca8d16e049391e0bdc3423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:05.368938 kubelet[2789]: E0123 01:10:05.367077 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e05d25e3352624a281079ce201cd72d117abe6833ca8d16e049391e0bdc3423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:05.368938 kubelet[2789]: E0123 01:10:05.367156 2789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e05d25e3352624a281079ce201cd72d117abe6833ca8d16e049391e0bdc3423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8qp5z" Jan 23 01:10:05.368938 kubelet[2789]: E0123 01:10:05.367220 2789 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e05d25e3352624a281079ce201cd72d117abe6833ca8d16e049391e0bdc3423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8qp5z" Jan 23 01:10:05.369551 kubelet[2789]: E0123 01:10:05.367275 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8qp5z_kube-system(54a633b2-3fb5-43ea-815f-68be219af473)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8qp5z_kube-system(54a633b2-3fb5-43ea-815f-68be219af473)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9e05d25e3352624a281079ce201cd72d117abe6833ca8d16e049391e0bdc3423\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8qp5z" podUID="54a633b2-3fb5-43ea-815f-68be219af473" Jan 23 01:10:05.370202 systemd[1]: run-netns-cni\x2dcac71bab\x2d3bab\x2d4177\x2df3bb\x2d593411a016fb.mount: Deactivated successfully. Jan 23 01:10:11.204761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103262248.mount: Deactivated successfully. Jan 23 01:10:11.241447 containerd[1569]: time="2026-01-23T01:10:11.241372024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:11.243618 containerd[1569]: time="2026-01-23T01:10:11.243568084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 01:10:11.245095 containerd[1569]: time="2026-01-23T01:10:11.245018451Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:11.248716 containerd[1569]: time="2026-01-23T01:10:11.248636015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:11.249757 containerd[1569]: time="2026-01-23T01:10:11.249564046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.89381567s" Jan 23 01:10:11.249757 containerd[1569]: time="2026-01-23T01:10:11.249610569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 01:10:11.270040 containerd[1569]: time="2026-01-23T01:10:11.269989076Z" level=info msg="CreateContainer within sandbox \"4b021b0aaa26053ca97d1ed6dbd5e0f444cde526f962656f7789603218bb42e6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 01:10:11.290709 containerd[1569]: time="2026-01-23T01:10:11.290091584Z" level=info msg="Container 7e2839d49d3030bb3a5dac71c646a158143d35bcc9d538eae29afe7d2fc15a2b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:11.306716 containerd[1569]: time="2026-01-23T01:10:11.306645623Z" level=info msg="CreateContainer within sandbox \"4b021b0aaa26053ca97d1ed6dbd5e0f444cde526f962656f7789603218bb42e6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7e2839d49d3030bb3a5dac71c646a158143d35bcc9d538eae29afe7d2fc15a2b\"" Jan 23 01:10:11.307548 containerd[1569]: time="2026-01-23T01:10:11.307490506Z" level=info msg="StartContainer for \"7e2839d49d3030bb3a5dac71c646a158143d35bcc9d538eae29afe7d2fc15a2b\"" Jan 23 01:10:11.310383 containerd[1569]: time="2026-01-23T01:10:11.310300150Z" level=info msg="connecting to shim 7e2839d49d3030bb3a5dac71c646a158143d35bcc9d538eae29afe7d2fc15a2b" address="unix:///run/containerd/s/13448cc614ae36621e3570bb56783850e0deadb18c373699e13111f0ef38ea6f" protocol=ttrpc version=3 Jan 23 
01:10:11.335290 systemd[1]: Started cri-containerd-7e2839d49d3030bb3a5dac71c646a158143d35bcc9d538eae29afe7d2fc15a2b.scope - libcontainer container 7e2839d49d3030bb3a5dac71c646a158143d35bcc9d538eae29afe7d2fc15a2b. Jan 23 01:10:11.444167 containerd[1569]: time="2026-01-23T01:10:11.444091440Z" level=info msg="StartContainer for \"7e2839d49d3030bb3a5dac71c646a158143d35bcc9d538eae29afe7d2fc15a2b\" returns successfully" Jan 23 01:10:11.576424 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 01:10:11.576787 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 23 01:10:11.801394 kubelet[2789]: I0123 01:10:11.801337 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/98f87fa8-1efc-406b-89d8-4c32839b99d2-whisker-backend-key-pair\") pod \"98f87fa8-1efc-406b-89d8-4c32839b99d2\" (UID: \"98f87fa8-1efc-406b-89d8-4c32839b99d2\") " Jan 23 01:10:11.802215 kubelet[2789]: I0123 01:10:11.801444 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f87fa8-1efc-406b-89d8-4c32839b99d2-whisker-ca-bundle\") pod \"98f87fa8-1efc-406b-89d8-4c32839b99d2\" (UID: \"98f87fa8-1efc-406b-89d8-4c32839b99d2\") " Jan 23 01:10:11.802215 kubelet[2789]: I0123 01:10:11.801532 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxwzj\" (UniqueName: \"kubernetes.io/projected/98f87fa8-1efc-406b-89d8-4c32839b99d2-kube-api-access-wxwzj\") pod \"98f87fa8-1efc-406b-89d8-4c32839b99d2\" (UID: \"98f87fa8-1efc-406b-89d8-4c32839b99d2\") " Jan 23 01:10:11.803387 kubelet[2789]: I0123 01:10:11.803338 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f87fa8-1efc-406b-89d8-4c32839b99d2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "98f87fa8-1efc-406b-89d8-4c32839b99d2" (UID: "98f87fa8-1efc-406b-89d8-4c32839b99d2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:10:11.808825 kubelet[2789]: I0123 01:10:11.808671 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f87fa8-1efc-406b-89d8-4c32839b99d2-kube-api-access-wxwzj" (OuterVolumeSpecName: "kube-api-access-wxwzj") pod "98f87fa8-1efc-406b-89d8-4c32839b99d2" (UID: "98f87fa8-1efc-406b-89d8-4c32839b99d2"). InnerVolumeSpecName "kube-api-access-wxwzj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:10:11.812225 kubelet[2789]: I0123 01:10:11.812150 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f87fa8-1efc-406b-89d8-4c32839b99d2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "98f87fa8-1efc-406b-89d8-4c32839b99d2" (UID: "98f87fa8-1efc-406b-89d8-4c32839b99d2"). InnerVolumeSpecName "whisker-backend-key-pair".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:10:11.903269 kubelet[2789]: I0123 01:10:11.903206 2789 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/98f87fa8-1efc-406b-89d8-4c32839b99d2-whisker-backend-key-pair\") on node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" DevicePath \"\"" Jan 23 01:10:11.903269 kubelet[2789]: I0123 01:10:11.903265 2789 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f87fa8-1efc-406b-89d8-4c32839b99d2-whisker-ca-bundle\") on node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" DevicePath \"\"" Jan 23 01:10:11.903269 kubelet[2789]: I0123 01:10:11.903282 2789 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wxwzj\" (UniqueName: \"kubernetes.io/projected/98f87fa8-1efc-406b-89d8-4c32839b99d2-kube-api-access-wxwzj\") on node \"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba\" DevicePath \"\"" Jan 23 01:10:12.203445 systemd[1]: var-lib-kubelet-pods-98f87fa8\x2d1efc\x2d406b\x2d89d8\x2d4c32839b99d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwxwzj.mount: Deactivated successfully. Jan 23 01:10:12.203599 systemd[1]: var-lib-kubelet-pods-98f87fa8\x2d1efc\x2d406b\x2d89d8\x2d4c32839b99d2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 01:10:12.404402 systemd[1]: Removed slice kubepods-besteffort-pod98f87fa8_1efc_406b_89d8_4c32839b99d2.slice - libcontainer container kubepods-besteffort-pod98f87fa8_1efc_406b_89d8_4c32839b99d2.slice. Jan 23 01:10:12.420549 kubelet[2789]: I0123 01:10:12.419326 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-88968" podStartSLOduration=2.283298232 podStartE2EDuration="18.419303328s" podCreationTimestamp="2026-01-23 01:09:54 +0000 UTC" firstStartedPulling="2026-01-23 01:09:55.114690951 +0000 UTC m=+24.187199805" lastFinishedPulling="2026-01-23 01:10:11.250696055 +0000 UTC m=+40.323204901" observedRunningTime="2026-01-23 01:10:12.418128041 +0000 UTC m=+41.490636908" watchObservedRunningTime="2026-01-23 01:10:12.419303328 +0000 UTC m=+41.491812195" Jan 23 01:10:12.502417 systemd[1]: Created slice kubepods-besteffort-pod2c3341c5_23de_41f7_b063_be3670a7e004.slice - libcontainer container kubepods-besteffort-pod2c3341c5_23de_41f7_b063_be3670a7e004.slice. 
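The pod_startup_latency_tracker entry above carries enough timestamps to reconstruct both of its figures: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration is that span with the image-pull window (firstStartedPulling through lastFinishedPulling) deducted, which is how a ~16s calico/node pull turns an 18.4s startup into a 2.28s SLO number. A short Go check of the arithmetic, using the timestamps exactly as logged:

package main

import (
	"fmt"
	"time"
)

// layout matches Go's default time.Time formatting used in the kubelet log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-23 01:09:54 +0000 UTC")
	firstPull := mustParse("2026-01-23 01:09:55.114690951 +0000 UTC")
	lastPull := mustParse("2026-01-23 01:10:11.250696055 +0000 UTC")
	running := mustParse("2026-01-23 01:10:12.419303328 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // startup latency excluding image pulls
	fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%s\n", e2e, slo)
	// Prints 18.419303328s and 2.283298224s; the log's 2.283298232s differs
	// only in the last digits, consistent with float rounding in the tracker.
}
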
Jan 23 01:10:12.608897 kubelet[2789]: I0123 01:10:12.608794 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2c3341c5-23de-41f7-b063-be3670a7e004-whisker-backend-key-pair\") pod \"whisker-59dcfd5f76-f9f85\" (UID: \"2c3341c5-23de-41f7-b063-be3670a7e004\") " pod="calico-system/whisker-59dcfd5f76-f9f85" Jan 23 01:10:12.609120 kubelet[2789]: I0123 01:10:12.609012 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c3341c5-23de-41f7-b063-be3670a7e004-whisker-ca-bundle\") pod \"whisker-59dcfd5f76-f9f85\" (UID: \"2c3341c5-23de-41f7-b063-be3670a7e004\") " pod="calico-system/whisker-59dcfd5f76-f9f85" Jan 23 01:10:12.609120 kubelet[2789]: I0123 01:10:12.609079 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s42pw\" (UniqueName: \"kubernetes.io/projected/2c3341c5-23de-41f7-b063-be3670a7e004-kube-api-access-s42pw\") pod \"whisker-59dcfd5f76-f9f85\" (UID: \"2c3341c5-23de-41f7-b063-be3670a7e004\") " pod="calico-system/whisker-59dcfd5f76-f9f85" Jan 23 01:10:12.810238 containerd[1569]: time="2026-01-23T01:10:12.810094162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59dcfd5f76-f9f85,Uid:2c3341c5-23de-41f7-b063-be3670a7e004,Namespace:calico-system,Attempt:0,}" Jan 23 01:10:12.958017 systemd-networkd[1460]: cali4a1de1b79b4: Link UP Jan 23 01:10:12.960875 systemd-networkd[1460]: cali4a1de1b79b4: Gained carrier Jan 23 01:10:12.987151 containerd[1569]: 2026-01-23 01:10:12.848 [INFO][3882] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:10:12.987151 containerd[1569]: 2026-01-23 01:10:12.861 [INFO][3882] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0 whisker-59dcfd5f76- calico-system 2c3341c5-23de-41f7-b063-be3670a7e004 904 0 2026-01-23 01:10:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59dcfd5f76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba whisker-59dcfd5f76-f9f85 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4a1de1b79b4 [] [] }} ContainerID="fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" Namespace="calico-system" Pod="whisker-59dcfd5f76-f9f85" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-" Jan 23 01:10:12.987151 containerd[1569]: 2026-01-23 01:10:12.861 [INFO][3882] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" Namespace="calico-system" Pod="whisker-59dcfd5f76-f9f85" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0" Jan 23 01:10:12.987151 containerd[1569]: 2026-01-23 01:10:12.896 [INFO][3894] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" HandleID="k8s-pod-network.fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" 
Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0" Jan 23 01:10:12.987532 containerd[1569]: 2026-01-23 01:10:12.896 [INFO][3894] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" HandleID="k8s-pod-network.fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5960), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", "pod":"whisker-59dcfd5f76-f9f85", "timestamp":"2026-01-23 01:10:12.896232285 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:10:12.987532 containerd[1569]: 2026-01-23 01:10:12.896 [INFO][3894] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:10:12.987532 containerd[1569]: 2026-01-23 01:10:12.896 [INFO][3894] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:10:12.987532 containerd[1569]: 2026-01-23 01:10:12.896 [INFO][3894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' Jan 23 01:10:12.987532 containerd[1569]: 2026-01-23 01:10:12.906 [INFO][3894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:12.987532 containerd[1569]: 2026-01-23 01:10:12.914 [INFO][3894] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:12.987532 containerd[1569]: 2026-01-23 01:10:12.920 [INFO][3894] ipam/ipam.go 511: Trying affinity for 192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:12.987532 containerd[1569]: 2026-01-23 01:10:12.922 [INFO][3894] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:12.988266 containerd[1569]: 2026-01-23 01:10:12.925 [INFO][3894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:12.988266 containerd[1569]: 2026-01-23 01:10:12.925 [INFO][3894] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.64/26 handle="k8s-pod-network.fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:12.988266 containerd[1569]: 2026-01-23 01:10:12.927 [INFO][3894] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914 Jan 23 01:10:12.988266 containerd[1569]: 2026-01-23 01:10:12.934 [INFO][3894] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.64/26 handle="k8s-pod-network.fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:12.988266 containerd[1569]: 2026-01-23 01:10:12.941 [INFO][3894] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.65/26] block=192.168.114.64/26 handle="k8s-pod-network.fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:12.988266 containerd[1569]: 2026-01-23 01:10:12.941 [INFO][3894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.65/26] handle="k8s-pod-network.fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:12.988266 containerd[1569]: 2026-01-23 01:10:12.941 [INFO][3894] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:10:12.988266 containerd[1569]: 2026-01-23 01:10:12.941 [INFO][3894] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.65/26] IPv6=[] ContainerID="fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" HandleID="k8s-pod-network.fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0" Jan 23 01:10:12.988624 containerd[1569]: 2026-01-23 01:10:12.945 [INFO][3882] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" Namespace="calico-system" Pod="whisker-59dcfd5f76-f9f85" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0", GenerateName:"whisker-59dcfd5f76-", Namespace:"calico-system", SelfLink:"", UID:"2c3341c5-23de-41f7-b063-be3670a7e004", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59dcfd5f76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"", Pod:"whisker-59dcfd5f76-f9f85", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a1de1b79b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:12.988750 containerd[1569]: 2026-01-23 01:10:12.945 [INFO][3882] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.65/32] ContainerID="fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" Namespace="calico-system" Pod="whisker-59dcfd5f76-f9f85" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0" Jan 23 01:10:12.988750 containerd[1569]: 2026-01-23 01:10:12.945 [INFO][3882] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a1de1b79b4 
ContainerID="fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" Namespace="calico-system" Pod="whisker-59dcfd5f76-f9f85" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0" Jan 23 01:10:12.988750 containerd[1569]: 2026-01-23 01:10:12.963 [INFO][3882] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" Namespace="calico-system" Pod="whisker-59dcfd5f76-f9f85" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0" Jan 23 01:10:12.988902 containerd[1569]: 2026-01-23 01:10:12.968 [INFO][3882] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" Namespace="calico-system" Pod="whisker-59dcfd5f76-f9f85" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0", GenerateName:"whisker-59dcfd5f76-", Namespace:"calico-system", SelfLink:"", UID:"2c3341c5-23de-41f7-b063-be3670a7e004", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59dcfd5f76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914", Pod:"whisker-59dcfd5f76-f9f85", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a1de1b79b4", MAC:"72:ef:8c:1c:36:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:12.989033 containerd[1569]: 2026-01-23 01:10:12.983 [INFO][3882] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" Namespace="calico-system" Pod="whisker-59dcfd5f76-f9f85" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-whisker--59dcfd5f76--f9f85-eth0" Jan 23 01:10:13.026554 containerd[1569]: time="2026-01-23T01:10:13.026482485Z" level=info msg="connecting to shim fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914" address="unix:///run/containerd/s/fe8c5f324253f6d23a7bd8ec79a3b399849c2198b4a7fd7cfbb9e7af6b3fb9bc" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:13.063155 systemd[1]: Started cri-containerd-fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914.scope - libcontainer container 
fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914. Jan 23 01:10:13.145192 kubelet[2789]: I0123 01:10:13.145139 2789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f87fa8-1efc-406b-89d8-4c32839b99d2" path="/var/lib/kubelet/pods/98f87fa8-1efc-406b-89d8-4c32839b99d2/volumes" Jan 23 01:10:13.257490 containerd[1569]: time="2026-01-23T01:10:13.257427640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59dcfd5f76-f9f85,Uid:2c3341c5-23de-41f7-b063-be3670a7e004,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc3bee0ef4e96bc59ca5fc271c52543ecd161c75d5b7252dc4de83f23501c914\"" Jan 23 01:10:13.262730 containerd[1569]: time="2026-01-23T01:10:13.262670709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:10:13.421152 containerd[1569]: time="2026-01-23T01:10:13.420888049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:13.425167 containerd[1569]: time="2026-01-23T01:10:13.425101664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:10:13.425667 containerd[1569]: time="2026-01-23T01:10:13.425336205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:10:13.427233 kubelet[2789]: E0123 01:10:13.425953 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:10:13.427233 kubelet[2789]: E0123 01:10:13.426141 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:10:13.427415 kubelet[2789]: E0123 01:10:13.426390 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d58f6313d7924c0db85c786610a81b82,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s42pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59dcfd5f76-f9f85_calico-system(2c3341c5-23de-41f7-b063-be3670a7e004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:13.429751 containerd[1569]: time="2026-01-23T01:10:13.429717019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:10:13.591937 containerd[1569]: time="2026-01-23T01:10:13.591314016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:13.593808 containerd[1569]: time="2026-01-23T01:10:13.593727983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:10:13.594322 containerd[1569]: time="2026-01-23T01:10:13.594203348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:10:13.595943 kubelet[2789]: E0123 01:10:13.595711 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:10:13.596112 kubelet[2789]: E0123 01:10:13.596071 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:10:13.596860 kubelet[2789]: E0123 01:10:13.596757 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s42pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59dcfd5f76-f9f85_calico-system(2c3341c5-23de-41f7-b063-be3670a7e004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:13.598176 kubelet[2789]: E0123 01:10:13.598094 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59dcfd5f76-f9f85" podUID="2c3341c5-23de-41f7-b063-be3670a7e004" Jan 23 01:10:14.079204 systemd-networkd[1460]: cali4a1de1b79b4: Gained IPv6LL Jan 23 01:10:14.397821 kubelet[2789]: E0123 01:10:14.397682 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59dcfd5f76-f9f85" podUID="2c3341c5-23de-41f7-b063-be3670a7e004" Jan 23 01:10:16.137813 containerd[1569]: time="2026-01-23T01:10:16.137736401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7456665ddf-rb6bt,Uid:064b559c-bfe9-4534-b533-689a0c2791a2,Namespace:calico-system,Attempt:0,}" Jan 23 01:10:16.285994 systemd-networkd[1460]: cali7742db83306: Link UP Jan 23 01:10:16.288828 systemd-networkd[1460]: cali7742db83306: Gained carrier Jan 23 01:10:16.316118 containerd[1569]: 2026-01-23 01:10:16.182 [INFO][4121] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:10:16.316118 containerd[1569]: 2026-01-23 01:10:16.197 [INFO][4121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0 calico-kube-controllers-7456665ddf- calico-system 064b559c-bfe9-4534-b533-689a0c2791a2 830 0 2026-01-23 01:09:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7456665ddf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba calico-kube-controllers-7456665ddf-rb6bt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7742db83306 [] [] }} ContainerID="21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" Namespace="calico-system" Pod="calico-kube-controllers-7456665ddf-rb6bt" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-" Jan 23 01:10:16.316118 containerd[1569]: 2026-01-23 01:10:16.197 [INFO][4121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" Namespace="calico-system" Pod="calico-kube-controllers-7456665ddf-rb6bt" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0" Jan 23 01:10:16.316454 containerd[1569]: 2026-01-23 01:10:16.231 [INFO][4133] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" HandleID="k8s-pod-network.21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0" Jan 23 01:10:16.316454 containerd[1569]: 2026-01-23 01:10:16.231 [INFO][4133] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" HandleID="k8s-pod-network.21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", "pod":"calico-kube-controllers-7456665ddf-rb6bt", "timestamp":"2026-01-23 01:10:16.23164479 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:10:16.316454 containerd[1569]: 2026-01-23 01:10:16.231 [INFO][4133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:10:16.316454 containerd[1569]: 2026-01-23 01:10:16.232 [INFO][4133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:10:16.316454 containerd[1569]: 2026-01-23 01:10:16.232 [INFO][4133] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' Jan 23 01:10:16.316454 containerd[1569]: 2026-01-23 01:10:16.245 [INFO][4133] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:16.316760 containerd[1569]: 2026-01-23 01:10:16.251 [INFO][4133] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:16.316760 containerd[1569]: 2026-01-23 01:10:16.256 [INFO][4133] ipam/ipam.go 511: Trying affinity for 192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:16.316760 containerd[1569]: 2026-01-23 01:10:16.259 [INFO][4133] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:16.316760 containerd[1569]: 2026-01-23 01:10:16.262 [INFO][4133] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:16.316760 containerd[1569]: 2026-01-23 01:10:16.262 [INFO][4133] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.64/26 handle="k8s-pod-network.21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:16.316760 containerd[1569]: 2026-01-23 01:10:16.264 [INFO][4133] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815 Jan 23 01:10:16.316760 containerd[1569]: 2026-01-23 01:10:16.270 [INFO][4133] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.64/26 handle="k8s-pod-network.21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:16.316760 containerd[1569]: 2026-01-23 01:10:16.278 [INFO][4133] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.66/26] block=192.168.114.64/26 handle="k8s-pod-network.21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" 
host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:16.316760 containerd[1569]: 2026-01-23 01:10:16.278 [INFO][4133] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.66/26] handle="k8s-pod-network.21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:16.316760 containerd[1569]: 2026-01-23 01:10:16.278 [INFO][4133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:10:16.318109 containerd[1569]: 2026-01-23 01:10:16.278 [INFO][4133] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.66/26] IPv6=[] ContainerID="21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" HandleID="k8s-pod-network.21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0" Jan 23 01:10:16.318224 containerd[1569]: 2026-01-23 01:10:16.280 [INFO][4121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" Namespace="calico-system" Pod="calico-kube-controllers-7456665ddf-rb6bt" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0", GenerateName:"calico-kube-controllers-7456665ddf-", Namespace:"calico-system", SelfLink:"", UID:"064b559c-bfe9-4534-b533-689a0c2791a2", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7456665ddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"", Pod:"calico-kube-controllers-7456665ddf-rb6bt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7742db83306", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:16.318224 containerd[1569]: 2026-01-23 01:10:16.281 [INFO][4121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.66/32] ContainerID="21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" Namespace="calico-system" Pod="calico-kube-controllers-7456665ddf-rb6bt" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0" Jan 23 01:10:16.318224 containerd[1569]: 2026-01-23 01:10:16.281 [INFO][4121] cni-plugin/dataplane_linux.go 69: Setting the host side veth 
name to cali7742db83306 ContainerID="21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" Namespace="calico-system" Pod="calico-kube-controllers-7456665ddf-rb6bt" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0" Jan 23 01:10:16.318224 containerd[1569]: 2026-01-23 01:10:16.288 [INFO][4121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" Namespace="calico-system" Pod="calico-kube-controllers-7456665ddf-rb6bt" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0" Jan 23 01:10:16.318224 containerd[1569]: 2026-01-23 01:10:16.291 [INFO][4121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" Namespace="calico-system" Pod="calico-kube-controllers-7456665ddf-rb6bt" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0", GenerateName:"calico-kube-controllers-7456665ddf-", Namespace:"calico-system", SelfLink:"", UID:"064b559c-bfe9-4534-b533-689a0c2791a2", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7456665ddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815", Pod:"calico-kube-controllers-7456665ddf-rb6bt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7742db83306", MAC:"66:8c:a9:28:c7:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:16.318854 containerd[1569]: 2026-01-23 01:10:16.309 [INFO][4121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" Namespace="calico-system" Pod="calico-kube-controllers-7456665ddf-rb6bt" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--kube--controllers--7456665ddf--rb6bt-eth0" Jan 23 01:10:16.350689 containerd[1569]: time="2026-01-23T01:10:16.350626982Z" level=info msg="connecting to shim 21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815" 
address="unix:///run/containerd/s/046caa1bc8a25fd1fbd63ef3701db2186a4ea130f5e434062bba1f5bbb9a0573" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:16.390209 systemd[1]: Started cri-containerd-21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815.scope - libcontainer container 21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815. Jan 23 01:10:16.463091 containerd[1569]: time="2026-01-23T01:10:16.463035774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7456665ddf-rb6bt,Uid:064b559c-bfe9-4534-b533-689a0c2791a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"21279d5ef9f747a1a1af872e3992f6e5fc82ff14741df4130c2a3642b36ce815\"" Jan 23 01:10:16.465645 containerd[1569]: time="2026-01-23T01:10:16.465604572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:10:16.632370 containerd[1569]: time="2026-01-23T01:10:16.632293318Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:16.634155 containerd[1569]: time="2026-01-23T01:10:16.634094930Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:10:16.634408 containerd[1569]: time="2026-01-23T01:10:16.634214634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:10:16.634517 kubelet[2789]: E0123 01:10:16.634433 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:10:16.634517 kubelet[2789]: E0123 01:10:16.634500 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:10:16.635447 kubelet[2789]: E0123 01:10:16.634727 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcjmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7456665ddf-rb6bt_calico-system(064b559c-bfe9-4534-b533-689a0c2791a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:16.636648 kubelet[2789]: E0123 01:10:16.636573 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" podUID="064b559c-bfe9-4534-b533-689a0c2791a2" Jan 23 01:10:17.036944 kubelet[2789]: I0123 01:10:17.036400 2789 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Jan 23 01:10:17.139029 containerd[1569]: time="2026-01-23T01:10:17.138720237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rx4sc,Uid:f817b277-502c-42fa-96de-77b7a2b164dc,Namespace:calico-system,Attempt:0,}" Jan 23 01:10:17.141794 containerd[1569]: time="2026-01-23T01:10:17.140722710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-47s7m,Uid:88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e,Namespace:calico-system,Attempt:0,}" Jan 23 01:10:17.150951 containerd[1569]: time="2026-01-23T01:10:17.148787184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d785d7599-c6hvd,Uid:107c88ad-a23e-4977-926b-0153678bb502,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:10:17.411644 kubelet[2789]: E0123 01:10:17.411543 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" podUID="064b559c-bfe9-4534-b533-689a0c2791a2" Jan 23 01:10:17.441372 systemd-networkd[1460]: calie3e851d3222: Link UP Jan 23 01:10:17.445222 systemd-networkd[1460]: calie3e851d3222: Gained carrier Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.275 [INFO][4234] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.306 [INFO][4234] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0 calico-apiserver-5d785d7599- calico-apiserver 107c88ad-a23e-4977-926b-0153678bb502 835 0 2026-01-23 01:09:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d785d7599 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba calico-apiserver-5d785d7599-c6hvd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie3e851d3222 [] [] }} ContainerID="85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-c6hvd" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-" Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.306 [INFO][4234] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-c6hvd" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0" Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.363 [INFO][4262] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" 
HandleID="k8s-pod-network.85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0" Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.363 [INFO][4262] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" HandleID="k8s-pod-network.85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", "pod":"calico-apiserver-5d785d7599-c6hvd", "timestamp":"2026-01-23 01:10:17.363324697 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.363 [INFO][4262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.363 [INFO][4262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.364 [INFO][4262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.375 [INFO][4262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.383 [INFO][4262] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.390 [INFO][4262] ipam/ipam.go 511: Trying affinity for 192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.393 [INFO][4262] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.397 [INFO][4262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.397 [INFO][4262] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.64/26 handle="k8s-pod-network.85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.400 [INFO][4262] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32 Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.407 [INFO][4262] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.64/26 handle="k8s-pod-network.85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" 
host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.468053 containerd[1569]: 2026-01-23 01:10:17.423 [INFO][4262] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.67/26] block=192.168.114.64/26 handle="k8s-pod-network.85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.469185 containerd[1569]: 2026-01-23 01:10:17.423 [INFO][4262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.67/26] handle="k8s-pod-network.85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.469185 containerd[1569]: 2026-01-23 01:10:17.423 [INFO][4262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:10:17.469185 containerd[1569]: 2026-01-23 01:10:17.423 [INFO][4262] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.67/26] IPv6=[] ContainerID="85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" HandleID="k8s-pod-network.85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0" Jan 23 01:10:17.469185 containerd[1569]: 2026-01-23 01:10:17.428 [INFO][4234] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-c6hvd" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0", GenerateName:"calico-apiserver-5d785d7599-", Namespace:"calico-apiserver", SelfLink:"", UID:"107c88ad-a23e-4977-926b-0153678bb502", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d785d7599", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"", Pod:"calico-apiserver-5d785d7599-c6hvd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3e851d3222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:17.469185 containerd[1569]: 2026-01-23 01:10:17.429 [INFO][4234] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.67/32] ContainerID="85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-c6hvd" 
WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0" Jan 23 01:10:17.469185 containerd[1569]: 2026-01-23 01:10:17.429 [INFO][4234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3e851d3222 ContainerID="85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-c6hvd" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0" Jan 23 01:10:17.469185 containerd[1569]: 2026-01-23 01:10:17.446 [INFO][4234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-c6hvd" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0" Jan 23 01:10:17.469868 containerd[1569]: 2026-01-23 01:10:17.447 [INFO][4234] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-c6hvd" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0", GenerateName:"calico-apiserver-5d785d7599-", Namespace:"calico-apiserver", SelfLink:"", UID:"107c88ad-a23e-4977-926b-0153678bb502", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d785d7599", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32", Pod:"calico-apiserver-5d785d7599-c6hvd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3e851d3222", MAC:"b6:db:ed:74:88:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:17.469868 containerd[1569]: 2026-01-23 01:10:17.463 [INFO][4234] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-c6hvd" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--c6hvd-eth0" Jan 23 01:10:17.510414 containerd[1569]: 
time="2026-01-23T01:10:17.510327009Z" level=info msg="connecting to shim 85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32" address="unix:///run/containerd/s/4fb61683dc683827bceded392ebd41b6a089d811477909b80f5c2ce2f60a7157" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:17.535179 systemd-networkd[1460]: cali7742db83306: Gained IPv6LL Jan 23 01:10:17.570876 systemd-networkd[1460]: cali297406b9274: Link UP Jan 23 01:10:17.571281 systemd-networkd[1460]: cali297406b9274: Gained carrier Jan 23 01:10:17.573761 systemd[1]: Started cri-containerd-85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32.scope - libcontainer container 85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32. Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.271 [INFO][4224] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.298 [INFO][4224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0 csi-node-driver- calico-system 88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e 736 0 2026-01-23 01:09:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba csi-node-driver-47s7m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali297406b9274 [] [] }} ContainerID="27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" Namespace="calico-system" Pod="csi-node-driver-47s7m" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-" Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.298 [INFO][4224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" Namespace="calico-system" Pod="csi-node-driver-47s7m" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0" Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.368 [INFO][4257] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" HandleID="k8s-pod-network.27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0" Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.369 [INFO][4257] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" HandleID="k8s-pod-network.27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", "pod":"csi-node-driver-47s7m", "timestamp":"2026-01-23 01:10:17.368445571 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.370 [INFO][4257] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.423 [INFO][4257] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.424 [INFO][4257] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.476 [INFO][4257] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.483 [INFO][4257] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.504 [INFO][4257] ipam/ipam.go 511: Trying affinity for 192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.511 [INFO][4257] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.515 [INFO][4257] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.516 [INFO][4257] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.64/26 handle="k8s-pod-network.27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.518 [INFO][4257] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848 Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.528 [INFO][4257] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.64/26 handle="k8s-pod-network.27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.605393 containerd[1569]: 2026-01-23 01:10:17.547 [INFO][4257] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.68/26] block=192.168.114.64/26 handle="k8s-pod-network.27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.607769 containerd[1569]: 2026-01-23 01:10:17.548 [INFO][4257] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.68/26] handle="k8s-pod-network.27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.607769 containerd[1569]: 2026-01-23 01:10:17.548 [INFO][4257] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
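The IPAM trace above shows why these pods land on consecutive addresses: the node holds an affinity for the block 192.168.114.64/26, and each CNI ADD claims the next free address inside it (.66 for calico-kube-controllers, .67 for calico-apiserver, .68 for csi-node-driver). A /26 covers 192.168.114.64 through 192.168.114.127, i.e. 64 addresses. A minimal stdlib-Go sketch of that block arithmetic follows; it is an illustration only, not Calico's allocator, which additionally tracks handles, reservations, and the datastore:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// Node-affine block reported in the log.
    	block := netip.MustParsePrefix("192.168.114.64/26")

    	// Walk the first few addresses the way a sequential allocator would.
    	addr := block.Addr() // 192.168.114.64, the network address itself
    	for i := 0; i < 6; i++ {
    		fmt.Printf("%s in block: %v\n", addr, block.Contains(addr))
    		addr = addr.Next()
    	}

    	// Find where the /26 ends: .64 + 63 = .127.
    	last := block.Addr()
    	for block.Contains(last.Next()) {
    		last = last.Next()
    	}
    	fmt.Println("block ends at", last) // 192.168.114.127
    }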
Jan 23 01:10:17.607769 containerd[1569]: 2026-01-23 01:10:17.548 [INFO][4257] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.68/26] IPv6=[] ContainerID="27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" HandleID="k8s-pod-network.27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0" Jan 23 01:10:17.607769 containerd[1569]: 2026-01-23 01:10:17.555 [INFO][4224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" Namespace="calico-system" Pod="csi-node-driver-47s7m" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"", Pod:"csi-node-driver-47s7m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali297406b9274", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:17.607769 containerd[1569]: 2026-01-23 01:10:17.555 [INFO][4224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.68/32] ContainerID="27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" Namespace="calico-system" Pod="csi-node-driver-47s7m" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0" Jan 23 01:10:17.607769 containerd[1569]: 2026-01-23 01:10:17.555 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali297406b9274 ContainerID="27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" Namespace="calico-system" Pod="csi-node-driver-47s7m" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0" Jan 23 01:10:17.607769 containerd[1569]: 2026-01-23 01:10:17.560 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" Namespace="calico-system" Pod="csi-node-driver-47s7m" 
WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0" Jan 23 01:10:17.610080 containerd[1569]: 2026-01-23 01:10:17.560 [INFO][4224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" Namespace="calico-system" Pod="csi-node-driver-47s7m" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848", Pod:"csi-node-driver-47s7m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali297406b9274", MAC:"fe:1c:40:ce:11:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:17.610080 containerd[1569]: 2026-01-23 01:10:17.592 [INFO][4224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" Namespace="calico-system" Pod="csi-node-driver-47s7m" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-csi--node--driver--47s7m-eth0" Jan 23 01:10:17.650655 containerd[1569]: time="2026-01-23T01:10:17.650588483Z" level=info msg="connecting to shim 27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848" address="unix:///run/containerd/s/27abe9f5316f39ab917034eb9c4394fa859e5257f3b5ea60f9ee61b99f781c69" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:17.674427 systemd-networkd[1460]: cali79ee2a58aa5: Link UP Jan 23 01:10:17.678256 systemd-networkd[1460]: cali79ee2a58aa5: Gained carrier Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.246 [INFO][4215] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.273 [INFO][4215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0 goldmane-666569f655- calico-system f817b277-502c-42fa-96de-77b7a2b164dc 836 0 
2026-01-23 01:09:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba goldmane-666569f655-rx4sc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali79ee2a58aa5 [] [] }} ContainerID="bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" Namespace="calico-system" Pod="goldmane-666569f655-rx4sc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-" Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.273 [INFO][4215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" Namespace="calico-system" Pod="goldmane-666569f655-rx4sc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0" Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.374 [INFO][4250] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" HandleID="k8s-pod-network.bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0" Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.374 [INFO][4250] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" HandleID="k8s-pod-network.bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037ccf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", "pod":"goldmane-666569f655-rx4sc", "timestamp":"2026-01-23 01:10:17.374038638 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.374 [INFO][4250] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.548 [INFO][4250] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
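Note the timestamps around the lock messages just above: [4250] logged "About to acquire host-wide IPAM lock" at 01:10:17.374 but "Acquired" only at 01:10:17.548, because [4262] and [4257] were working through the same block in the meantime. Three concurrent CNI ADDs were serialized by that host-wide lock, which is what keeps them from claiming the same address. A toy Go sketch of the pattern, with a plain in-process mutex standing in for Calico's host-wide lock (pod names and offsets are illustrative, not taken from Calico's code):

    package main

    import (
    	"fmt"
    	"sync"
    )

    // allocator hands out consecutive offsets from a block while holding
    // a host-wide lock, so concurrent ADDs never claim the same address.
    type allocator struct {
    	mu   sync.Mutex // stands in for the "host-wide IPAM lock"
    	next int        // next free offset inside 192.168.114.64/26
    }

    func (a *allocator) assign(pod string) string {
    	a.mu.Lock()         // "Acquired host-wide IPAM lock."
    	defer a.mu.Unlock() // "Released host-wide IPAM lock."
    	ip := fmt.Sprintf("192.168.114.%d", 64+a.next)
    	a.next++
    	return ip + " -> " + pod
    }

    func main() {
    	a := &allocator{next: 2} // start at .66; earlier assignments are not modeled here
    	var wg sync.WaitGroup
    	for _, pod := range []string{"calico-kube-controllers", "calico-apiserver", "csi-node-driver", "goldmane"} {
    		wg.Add(1)
    		go func(p string) {
    			defer wg.Done()
    			fmt.Println(a.assign(p))
    		}(pod)
    	}
    	wg.Wait() // which pod gets which address depends on lock order, as in the log
    }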
Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.549 [INFO][4250] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.577 [INFO][4250] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.599 [INFO][4250] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.613 [INFO][4250] ipam/ipam.go 511: Trying affinity for 192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.618 [INFO][4250] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.622 [INFO][4250] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.622 [INFO][4250] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.64/26 handle="k8s-pod-network.bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.625 [INFO][4250] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.639 [INFO][4250] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.64/26 handle="k8s-pod-network.bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.659 [INFO][4250] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.69/26] block=192.168.114.64/26 handle="k8s-pod-network.bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.705311 containerd[1569]: 2026-01-23 01:10:17.659 [INFO][4250] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.69/26] handle="k8s-pod-network.bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:17.707305 containerd[1569]: 2026-01-23 01:10:17.660 [INFO][4250] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
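Once an endpoint's veth comes up, the kernel runs IPv6 autoconfiguration on it, which is what the "Gained IPv6LL" messages from systemd-networkd (e.g. for cali7742db83306 above) report. Under the classic EUI-64 scheme the link-local address is derived from the interface MAC by flipping the universal/local bit and splicing ff:fe into the middle; kernels configured with addr_gen_mode=stable-privacy derive it differently, so treat this as the textbook derivation only. A stdlib sketch, using the MAC the log records for the goldmane veth a little further down:

    package main

    import (
    	"fmt"
    	"net"
    	"net/netip"
    )

    // eui64LinkLocal derives the classic EUI-64 fe80:: address from a 48-bit MAC.
    func eui64LinkLocal(mac net.HardwareAddr) netip.Addr {
    	var b [16]byte
    	b[0], b[1] = 0xfe, 0x80 // fe80::/64 link-local prefix
    	b[8] = mac[0] ^ 0x02    // flip the universal/local bit
    	b[9], b[10] = mac[1], mac[2]
    	b[11], b[12] = 0xff, 0xfe // EUI-64 filler
    	b[13], b[14], b[15] = mac[3], mac[4], mac[5]
    	return netip.AddrFrom16(b)
    }

    func main() {
    	// MAC recorded for cali79ee2a58aa5 (goldmane) in this log.
    	mac, _ := net.ParseMAC("3a:3d:cb:cd:f9:bb")
    	fmt.Println(eui64LinkLocal(mac)) // fe80::383d:cbff:fecd:f9bb
    }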
Jan 23 01:10:17.707305 containerd[1569]: 2026-01-23 01:10:17.660 [INFO][4250] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.69/26] IPv6=[] ContainerID="bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" HandleID="k8s-pod-network.bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0" Jan 23 01:10:17.707305 containerd[1569]: 2026-01-23 01:10:17.669 [INFO][4215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" Namespace="calico-system" Pod="goldmane-666569f655-rx4sc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f817b277-502c-42fa-96de-77b7a2b164dc", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"", Pod:"goldmane-666569f655-rx4sc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali79ee2a58aa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:17.707305 containerd[1569]: 2026-01-23 01:10:17.669 [INFO][4215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.69/32] ContainerID="bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" Namespace="calico-system" Pod="goldmane-666569f655-rx4sc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0" Jan 23 01:10:17.707305 containerd[1569]: 2026-01-23 01:10:17.669 [INFO][4215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79ee2a58aa5 ContainerID="bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" Namespace="calico-system" Pod="goldmane-666569f655-rx4sc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0" Jan 23 01:10:17.707305 containerd[1569]: 2026-01-23 01:10:17.679 [INFO][4215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" Namespace="calico-system" Pod="goldmane-666569f655-rx4sc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0" Jan 23 01:10:17.707668 
containerd[1569]: 2026-01-23 01:10:17.681 [INFO][4215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" Namespace="calico-system" Pod="goldmane-666569f655-rx4sc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f817b277-502c-42fa-96de-77b7a2b164dc", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f", Pod:"goldmane-666569f655-rx4sc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali79ee2a58aa5", MAC:"3a:3d:cb:cd:f9:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:17.707668 containerd[1569]: 2026-01-23 01:10:17.700 [INFO][4215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" Namespace="calico-system" Pod="goldmane-666569f655-rx4sc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-goldmane--666569f655--rx4sc-eth0" Jan 23 01:10:17.740226 systemd[1]: Started cri-containerd-27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848.scope - libcontainer container 27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848. 
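With the endpoint written to the datastore, containerd starts a shim for each sandbox and systemd wraps it in a transient cri-containerd-<id>.scope unit, as the "Started cri-containerd-...scope - libcontainer container ..." lines show; the earlier "connecting to shim ... protocol=ttrpc" messages are containerd dialing the shim's unix socket under /run/containerd/s/. A quick stdlib check that such a socket is accepting connections (a hypothetical troubleshooting snippet; the path is copied from the csi-node-driver sandbox above and is unique per sandbox):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Shim socket address logged for the csi-node-driver sandbox.
    	const sock = "/run/containerd/s/27abe9f5316f39ab917034eb9c4394fa859e5257f3b5ea60f9ee61b99f781c69"

    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Println("shim not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("shim is listening on", sock)
    }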
Jan 23 01:10:17.760368 containerd[1569]: time="2026-01-23T01:10:17.760240196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d785d7599-c6hvd,Uid:107c88ad-a23e-4977-926b-0153678bb502,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"85f21ae87d5af1cfe99873ae780c768025cd35d6d0f80fa046e5af2e6dd6ff32\"" Jan 23 01:10:17.765113 containerd[1569]: time="2026-01-23T01:10:17.765068507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:10:17.774693 containerd[1569]: time="2026-01-23T01:10:17.774624838Z" level=info msg="connecting to shim bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f" address="unix:///run/containerd/s/48b799fe28ec29ee73c4ffb638382dcee039c5b8fcb0ad9a62cede7859ef5a0c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:17.826267 containerd[1569]: time="2026-01-23T01:10:17.826042534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-47s7m,Uid:88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e,Namespace:calico-system,Attempt:0,} returns sandbox id \"27a6354252f0acc49f1c2b2b5cd7d675bf51121b213b4ccbe5ce0b2917f62848\"" Jan 23 01:10:17.848476 systemd[1]: Started cri-containerd-bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f.scope - libcontainer container bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f. Jan 23 01:10:17.942431 containerd[1569]: time="2026-01-23T01:10:17.942109264Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:17.946087 containerd[1569]: time="2026-01-23T01:10:17.945952247Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:10:17.947548 containerd[1569]: time="2026-01-23T01:10:17.946977616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:10:17.948171 kubelet[2789]: E0123 01:10:17.948111 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:17.949157 kubelet[2789]: E0123 01:10:17.948200 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:17.949157 kubelet[2789]: E0123 01:10:17.948572 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rwh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d785d7599-c6hvd_calico-apiserver(107c88ad-a23e-4977-926b-0153678bb502): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:17.950659 containerd[1569]: time="2026-01-23T01:10:17.949745682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:10:17.950853 kubelet[2789]: E0123 01:10:17.950054 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" podUID="107c88ad-a23e-4977-926b-0153678bb502" Jan 23 01:10:18.001651 containerd[1569]: time="2026-01-23T01:10:18.001224855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rx4sc,Uid:f817b277-502c-42fa-96de-77b7a2b164dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb6065c9e2f1f6ce95ad053c8f4905edeb16095a8362b07e12a9be66414dc31f\"" Jan 23 01:10:18.120724 containerd[1569]: time="2026-01-23T01:10:18.120666101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:18.122402 containerd[1569]: time="2026-01-23T01:10:18.122303163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:10:18.122659 containerd[1569]: time="2026-01-23T01:10:18.122346915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:10:18.123940 kubelet[2789]: E0123 01:10:18.123026 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:10:18.124335 kubelet[2789]: E0123 01:10:18.123144 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:10:18.125550 containerd[1569]: time="2026-01-23T01:10:18.124842575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:10:18.125659 kubelet[2789]: E0123 01:10:18.125089 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4fjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-47s7m_calico-system(88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:18.139324 containerd[1569]: time="2026-01-23T01:10:18.137850328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8qp5z,Uid:54a633b2-3fb5-43ea-815f-68be219af473,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:18.301075 containerd[1569]: time="2026-01-23T01:10:18.300887547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:18.303656 containerd[1569]: time="2026-01-23T01:10:18.303294402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:10:18.303656 containerd[1569]: time="2026-01-23T01:10:18.303352974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:10:18.306111 kubelet[2789]: E0123 01:10:18.305112 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:10:18.306111 kubelet[2789]: E0123 01:10:18.305181 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:10:18.306111 kubelet[2789]: E0123 01:10:18.305649 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5jz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rx4sc_calico-system(f817b277-502c-42fa-96de-77b7a2b164dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:18.306503 containerd[1569]: time="2026-01-23T01:10:18.306315586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:10:18.307662 kubelet[2789]: E0123 01:10:18.307615 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rx4sc" podUID="f817b277-502c-42fa-96de-77b7a2b164dc" Jan 23 01:10:18.427736 kubelet[2789]: E0123 01:10:18.427666 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rx4sc" podUID="f817b277-502c-42fa-96de-77b7a2b164dc" Jan 23 01:10:18.435970 kubelet[2789]: E0123 01:10:18.435278 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" podUID="064b559c-bfe9-4534-b533-689a0c2791a2" Jan 23 01:10:18.435970 kubelet[2789]: E0123 01:10:18.435424 2789 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" podUID="107c88ad-a23e-4977-926b-0153678bb502" Jan 23 01:10:18.458188 systemd-networkd[1460]: cali8b5e20b64a0: Link UP Jan 23 01:10:18.458524 systemd-networkd[1460]: cali8b5e20b64a0: Gained carrier Jan 23 01:10:18.487552 containerd[1569]: time="2026-01-23T01:10:18.487482241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:18.489246 containerd[1569]: time="2026-01-23T01:10:18.489169543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:10:18.489376 containerd[1569]: time="2026-01-23T01:10:18.489299004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:10:18.489856 kubelet[2789]: E0123 01:10:18.489771 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:10:18.490009 kubelet[2789]: E0123 01:10:18.489891 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:10:18.490382 kubelet[2789]: E0123 01:10:18.490284 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4fjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-47s7m_calico-system(88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:18.491931 kubelet[2789]: E0123 01:10:18.491527 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.257 [INFO][4457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0 coredns-668d6bf9bc- kube-system 54a633b2-3fb5-43ea-815f-68be219af473 833 0 2026-01-23 01:09:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] 
[]} {k8s ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba coredns-668d6bf9bc-8qp5z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8b5e20b64a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8qp5z" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.257 [INFO][4457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8qp5z" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.330 [INFO][4480] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" HandleID="k8s-pod-network.1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.332 [INFO][4480] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" HandleID="k8s-pod-network.1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5960), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", "pod":"coredns-668d6bf9bc-8qp5z", "timestamp":"2026-01-23 01:10:18.330617175 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.332 [INFO][4480] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.332 [INFO][4480] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
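Annotation: stepping back from the CNI flow for a moment, every image pull earlier in this window fails identically: containerd resolves ghcr.io/flatcar/calico/{apiserver,csi,goldmane,node-driver-registrar}:v3.30.4, ghcr.io answers 404 Not Found, containerd surfaces a NotFound RPC error, and kubelet records ErrImagePull. The resolution step can be reproduced outside the kubelet by asking the registry's v2 API for the manifest directly, as in the sketch below. The anonymous token request follows the standard Docker registry token protocol; whether ghcr.io accepts it unauthenticated for a given repository is an assumption, as are the Accept media types listed.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// checkManifest asks a registry's v2 API whether a tag resolves, mirroring
// the resolution step that fails with 404 in the log above. The anonymous
// token flow follows the standard Docker registry token protocol; whether
// ghcr.io honors it for this repository is an assumption.
func checkManifest(registry, repo, tag string) (int, error) {
	// Step 1: fetch an anonymous pull token.
	tokURL := fmt.Sprintf("https://%s/token?service=%s&scope=repository:%s:pull",
		registry, registry, repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return 0, err
	}

	// Step 2: HEAD the manifest; 200 = tag exists, 404 = what kubelet saw.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag), nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	res.Body.Close()
	return res.StatusCode, nil
}

func main() {
	code, err := checkManifest("ghcr.io", "flatcar/calico/apiserver", "v3.30.4")
	if err != nil {
		panic(err)
	}
	fmt.Println("manifest status:", code) // the failures above correspond to 404
}
```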
Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.332 [INFO][4480] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.346 [INFO][4480] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.357 [INFO][4480] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.363 [INFO][4480] ipam/ipam.go 511: Trying affinity for 192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.366 [INFO][4480] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.374 [INFO][4480] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.375 [INFO][4480] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.64/26 handle="k8s-pod-network.1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.380 [INFO][4480] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.387 [INFO][4480] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.64/26 handle="k8s-pod-network.1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.437 [INFO][4480] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.70/26] block=192.168.114.64/26 handle="k8s-pod-network.1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.437 [INFO][4480] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.70/26] handle="k8s-pod-network.1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:18.551032 containerd[1569]: 2026-01-23 01:10:18.438 [INFO][4480] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
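Annotation: the IPAM trace above is Calico's block-affinity scheme in miniature. This node holds an affinity for the /26 block 192.168.114.64/26, so new pods draw consecutive addresses from it (.69, .70, .71, .72 across this section), and each assignment is serialized by the host-wide IPAM lock logged as "Acquired"/"Released". A minimal sketch of that inner step, handing out the first free address from an affine block under a mutex, follows; the data structures are deliberate simplifications, not Calico's (which persists blocks and handles in the datastore).

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator hands out addresses from a single affine CIDR block,
// serialized by a lock, echoing the "Acquired/Released host-wide IPAM lock"
// lines above. A toy model only: Calico's real allocator persists blocks in
// the datastore and tracks handles and attributes, which this omits.
type blockAllocator struct {
	mu    sync.Mutex
	block netip.Prefix
	used  map[netip.Addr]bool
}

func (a *blockAllocator) assign() (netip.Addr, bool) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for ip := a.block.Addr(); a.block.Contains(ip); ip = ip.Next() {
		if !a.used[ip] {
			a.used[ip] = true
			return ip, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	alloc := &blockAllocator{
		block: netip.MustParsePrefix("192.168.114.64/26"),
		used:  map[netip.Addr]bool{},
	}
	// Pretend .64 through .69 are already taken, as they are in the log
	// before the coredns assignment.
	for ip := netip.MustParseAddr("192.168.114.64"); ip.Compare(netip.MustParseAddr("192.168.114.70")) < 0; ip = ip.Next() {
		alloc.used[ip] = true
	}
	ip, _ := alloc.assign()
	fmt.Println(ip) // 192.168.114.70, matching the coredns claim above
}
```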
Jan 23 01:10:18.555870 containerd[1569]: 2026-01-23 01:10:18.438 [INFO][4480] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.70/26] IPv6=[] ContainerID="1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" HandleID="k8s-pod-network.1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0" Jan 23 01:10:18.555870 containerd[1569]: 2026-01-23 01:10:18.444 [INFO][4457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8qp5z" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"54a633b2-3fb5-43ea-815f-68be219af473", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"", Pod:"coredns-668d6bf9bc-8qp5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b5e20b64a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:18.555870 containerd[1569]: 2026-01-23 01:10:18.446 [INFO][4457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.70/32] ContainerID="1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8qp5z" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0" Jan 23 01:10:18.555870 containerd[1569]: 2026-01-23 01:10:18.446 [INFO][4457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b5e20b64a0 ContainerID="1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8qp5z" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0" Jan 23 01:10:18.555870 containerd[1569]: 2026-01-23 01:10:18.462 
[INFO][4457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8qp5z" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0" Jan 23 01:10:18.556331 containerd[1569]: 2026-01-23 01:10:18.464 [INFO][4457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8qp5z" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"54a633b2-3fb5-43ea-815f-68be219af473", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e", Pod:"coredns-668d6bf9bc-8qp5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b5e20b64a0", MAC:"be:9f:da:8f:8f:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:18.556331 containerd[1569]: 2026-01-23 01:10:18.546 [INFO][4457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8qp5z" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--8qp5z-eth0" Jan 23 01:10:18.633189 containerd[1569]: time="2026-01-23T01:10:18.633127075Z" level=info msg="connecting to shim 1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e" address="unix:///run/containerd/s/593e656588b685b2db4f628ef176164d365efaa7d175d92ddea82c40bdfffb77" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:18.708326 systemd[1]: Started cri-containerd-1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e.scope - libcontainer container 
1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e. Jan 23 01:10:18.815212 systemd-networkd[1460]: calie3e851d3222: Gained IPv6LL Jan 23 01:10:18.873048 containerd[1569]: time="2026-01-23T01:10:18.872960524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8qp5z,Uid:54a633b2-3fb5-43ea-815f-68be219af473,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e\"" Jan 23 01:10:18.879739 containerd[1569]: time="2026-01-23T01:10:18.879687974Z" level=info msg="CreateContainer within sandbox \"1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:10:18.907993 containerd[1569]: time="2026-01-23T01:10:18.905390300Z" level=info msg="Container e75fdd3a64883004c32ae43b55d848895a7e4f5ce350e5aa3b5d4908803db55f: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:18.911318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount859276926.mount: Deactivated successfully. Jan 23 01:10:18.934376 containerd[1569]: time="2026-01-23T01:10:18.934122986Z" level=info msg="CreateContainer within sandbox \"1e1aae322ebc5ee358ed7d1379a9399200cfeb97e87ac3dc04d90a9c69e3b29e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e75fdd3a64883004c32ae43b55d848895a7e4f5ce350e5aa3b5d4908803db55f\"" Jan 23 01:10:18.936370 containerd[1569]: time="2026-01-23T01:10:18.936322682Z" level=info msg="StartContainer for \"e75fdd3a64883004c32ae43b55d848895a7e4f5ce350e5aa3b5d4908803db55f\"" Jan 23 01:10:18.940943 containerd[1569]: time="2026-01-23T01:10:18.939957120Z" level=info msg="connecting to shim e75fdd3a64883004c32ae43b55d848895a7e4f5ce350e5aa3b5d4908803db55f" address="unix:///run/containerd/s/593e656588b685b2db4f628ef176164d365efaa7d175d92ddea82c40bdfffb77" protocol=ttrpc version=3 Jan 23 01:10:18.981229 systemd[1]: Started cri-containerd-e75fdd3a64883004c32ae43b55d848895a7e4f5ce350e5aa3b5d4908803db55f.scope - libcontainer container e75fdd3a64883004c32ae43b55d848895a7e4f5ce350e5aa3b5d4908803db55f. 
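Annotation: containerd logs "connecting to shim <id> address=unix:///run/containerd/s/<hash> ... protocol=ttrpc" once for the coredns sandbox and again for the coredns container, with the identical socket path both times: each sandbox gets one shim process, reachable over a unix-domain socket, and containers in that sandbox reuse it. The sketch below shows only the transport-level step, stripping the unix:// scheme and dialing the socket with the standard library; speaking actual ttrpc on top would require the ttrpc client package, which is omitted here.

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

// dialShim connects to a containerd shim socket address as printed in the
// log (e.g. "unix:///run/containerd/s/593e6565..."). It only establishes
// the unix-domain connection; the ttrpc handshake on top is not shown.
func dialShim(address string) (net.Conn, error) {
	path := strings.TrimPrefix(address, "unix://")
	return net.DialTimeout("unix", path, 2*time.Second)
}

func main() {
	conn, err := dialShim("unix:///run/containerd/s/593e656588b685b2db4f628ef176164d365efaa7d175d92ddea82c40bdfffb77")
	if err != nil {
		fmt.Println("dial failed (expected off-host):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```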
Jan 23 01:10:19.068223 containerd[1569]: time="2026-01-23T01:10:19.067983989Z" level=info msg="StartContainer for \"e75fdd3a64883004c32ae43b55d848895a7e4f5ce350e5aa3b5d4908803db55f\" returns successfully" Jan 23 01:10:19.071191 systemd-networkd[1460]: cali79ee2a58aa5: Gained IPv6LL Jan 23 01:10:19.071625 systemd-networkd[1460]: cali297406b9274: Gained IPv6LL Jan 23 01:10:19.138189 containerd[1569]: time="2026-01-23T01:10:19.138123056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d785d7599-4z2jg,Uid:ae9746a2-a617-45a0-ab4a-8c3ff369f251,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:10:19.386518 systemd-networkd[1460]: calid88c8e89ce4: Link UP Jan 23 01:10:19.386882 systemd-networkd[1460]: calid88c8e89ce4: Gained carrier Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.240 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0 calico-apiserver-5d785d7599- calico-apiserver ae9746a2-a617-45a0-ab4a-8c3ff369f251 834 0 2026-01-23 01:09:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d785d7599 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba calico-apiserver-5d785d7599-4z2jg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid88c8e89ce4 [] [] }} ContainerID="678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-4z2jg" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-" Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.241 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-4z2jg" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0" Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.315 [INFO][4609] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" HandleID="k8s-pod-network.678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0" Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.315 [INFO][4609] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" HandleID="k8s-pod-network.678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", "pod":"calico-apiserver-5d785d7599-4z2jg", "timestamp":"2026-01-23 01:10:19.315114708 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.315 [INFO][4609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.316 [INFO][4609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.316 [INFO][4609] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.327 [INFO][4609] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.333 [INFO][4609] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.341 [INFO][4609] ipam/ipam.go 511: Trying affinity for 192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.345 [INFO][4609] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.349 [INFO][4609] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.349 [INFO][4609] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.64/26 handle="k8s-pod-network.678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.351 [INFO][4609] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2 Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.359 [INFO][4609] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.64/26 handle="k8s-pod-network.678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:19.420203 containerd[1569]: 2026-01-23 01:10:19.371 [INFO][4609] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.71/26] block=192.168.114.64/26 handle="k8s-pod-network.678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:19.421547 containerd[1569]: 2026-01-23 01:10:19.371 [INFO][4609] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.71/26] handle="k8s-pod-network.678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:19.421547 containerd[1569]: 2026-01-23 01:10:19.371 [INFO][4609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:10:19.421547 containerd[1569]: 2026-01-23 01:10:19.371 [INFO][4609] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.71/26] IPv6=[] ContainerID="678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" HandleID="k8s-pod-network.678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0" Jan 23 01:10:19.421547 containerd[1569]: 2026-01-23 01:10:19.376 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-4z2jg" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0", GenerateName:"calico-apiserver-5d785d7599-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae9746a2-a617-45a0-ab4a-8c3ff369f251", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d785d7599", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"", Pod:"calico-apiserver-5d785d7599-4z2jg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid88c8e89ce4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:19.421547 containerd[1569]: 2026-01-23 01:10:19.377 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.71/32] ContainerID="678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-4z2jg" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0" Jan 23 01:10:19.421547 containerd[1569]: 2026-01-23 01:10:19.377 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid88c8e89ce4 ContainerID="678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-4z2jg" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0" Jan 23 01:10:19.421547 containerd[1569]: 2026-01-23 01:10:19.385 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" Namespace="calico-apiserver" 
Pod="calico-apiserver-5d785d7599-4z2jg" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0" Jan 23 01:10:19.422674 containerd[1569]: 2026-01-23 01:10:19.386 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-4z2jg" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0", GenerateName:"calico-apiserver-5d785d7599-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae9746a2-a617-45a0-ab4a-8c3ff369f251", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d785d7599", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2", Pod:"calico-apiserver-5d785d7599-4z2jg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid88c8e89ce4", MAC:"1a:0d:3c:15:d5:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:19.422674 containerd[1569]: 2026-01-23 01:10:19.408 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" Namespace="calico-apiserver" Pod="calico-apiserver-5d785d7599-4z2jg" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-calico--apiserver--5d785d7599--4z2jg-eth0" Jan 23 01:10:19.446945 kubelet[2789]: E0123 01:10:19.446649 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rx4sc" podUID="f817b277-502c-42fa-96de-77b7a2b164dc" Jan 23 01:10:19.449147 kubelet[2789]: E0123 01:10:19.447442 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" podUID="107c88ad-a23e-4977-926b-0153678bb502" Jan 23 01:10:19.450605 kubelet[2789]: E0123 01:10:19.450514 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:10:19.491252 containerd[1569]: time="2026-01-23T01:10:19.491038376Z" level=info msg="connecting to shim 678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2" address="unix:///run/containerd/s/747c09782942ce56104f55dc1c1833284c8ebd3f8d18419793f35f2592f45204" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:19.545681 kubelet[2789]: I0123 01:10:19.545604 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8qp5z" podStartSLOduration=43.545575669 podStartE2EDuration="43.545575669s" podCreationTimestamp="2026-01-23 01:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:10:19.542838723 +0000 UTC m=+48.615347587" watchObservedRunningTime="2026-01-23 01:10:19.545575669 +0000 UTC m=+48.618084534" Jan 23 01:10:19.580263 systemd[1]: Started cri-containerd-678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2.scope - libcontainer container 678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2. 
Jan 23 01:10:19.901416 containerd[1569]: time="2026-01-23T01:10:19.901343545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d785d7599-4z2jg,Uid:ae9746a2-a617-45a0-ab4a-8c3ff369f251,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"678310271116df9ee0eddf8556ad4a4069c2372e27ab7325dc20e147035fffa2\"" Jan 23 01:10:19.907203 containerd[1569]: time="2026-01-23T01:10:19.906839534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:10:20.023709 systemd-networkd[1460]: vxlan.calico: Link UP Jan 23 01:10:20.023723 systemd-networkd[1460]: vxlan.calico: Gained carrier Jan 23 01:10:20.070199 containerd[1569]: time="2026-01-23T01:10:20.069649238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:20.071945 containerd[1569]: time="2026-01-23T01:10:20.071559647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:10:20.072280 containerd[1569]: time="2026-01-23T01:10:20.072148659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:10:20.073933 kubelet[2789]: E0123 01:10:20.073800 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:20.073933 kubelet[2789]: E0123 01:10:20.073868 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:20.075427 kubelet[2789]: E0123 01:10:20.075167 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdrwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d785d7599-4z2jg_calico-apiserver(ae9746a2-a617-45a0-ab4a-8c3ff369f251): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:20.077067 kubelet[2789]: E0123 01:10:20.076949 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" podUID="ae9746a2-a617-45a0-ab4a-8c3ff369f251" Jan 23 01:10:20.138425 containerd[1569]: time="2026-01-23T01:10:20.137872405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l6md5,Uid:5ce5d673-fa45-45a7-8888-a98708c476b0,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:20.351192 systemd-networkd[1460]: cali8b5e20b64a0: Gained IPv6LL Jan 23 01:10:20.370217 systemd-networkd[1460]: cali535ae1d1121: Link UP Jan 23 01:10:20.373119 systemd-networkd[1460]: cali535ae1d1121: Gained carrier Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.222 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0 coredns-668d6bf9bc- kube-system 5ce5d673-fa45-45a7-8888-a98708c476b0 827 0 2026-01-23 01:09:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba coredns-668d6bf9bc-l6md5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali535ae1d1121 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" Namespace="kube-system" Pod="coredns-668d6bf9bc-l6md5" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.222 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-l6md5" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.302 [INFO][4719] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" HandleID="k8s-pod-network.c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.303 [INFO][4719] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" HandleID="k8s-pod-network.c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000101780), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", "pod":"coredns-668d6bf9bc-l6md5", "timestamp":"2026-01-23 01:10:20.302928914 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.303 [INFO][4719] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.303 [INFO][4719] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.303 [INFO][4719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba' Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.315 [INFO][4719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.321 [INFO][4719] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.327 [INFO][4719] ipam/ipam.go 511: Trying affinity for 192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.330 [INFO][4719] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.334 [INFO][4719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.64/26 host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.334 [INFO][4719] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.64/26 handle="k8s-pod-network.c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.336 [INFO][4719] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.342 [INFO][4719] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.64/26 handle="k8s-pod-network.c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.356 [INFO][4719] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.72/26] block=192.168.114.64/26 handle="k8s-pod-network.c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.356 [INFO][4719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.72/26] handle="k8s-pod-network.c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" host="ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba" Jan 23 01:10:20.404235 containerd[1569]: 2026-01-23 01:10:20.357 [INFO][4719] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
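The IPAM walk above confirms the node's affinity for block 192.168.114.64/26 and claims 192.168.114.72 from it. A /26 spans 64 addresses (.64 through .127), so the claimed address falls inside the block; a quick check of that arithmetic with the Go standard library:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address exactly as they appear in the IPAM entries above.
	block := netip.MustParsePrefix("192.168.114.64/26")
	addr := netip.MustParseAddr("192.168.114.72")
	fmt.Println(block.Contains(addr)) // true: a /26 covers .64 through .127
}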
Jan 23 01:10:20.405315 containerd[1569]: 2026-01-23 01:10:20.357 [INFO][4719] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.72/26] IPv6=[] ContainerID="c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" HandleID="k8s-pod-network.c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" Workload="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0" Jan 23 01:10:20.405315 containerd[1569]: 2026-01-23 01:10:20.362 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" Namespace="kube-system" Pod="coredns-668d6bf9bc-l6md5" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5ce5d673-fa45-45a7-8888-a98708c476b0", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"", Pod:"coredns-668d6bf9bc-l6md5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali535ae1d1121", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:20.405315 containerd[1569]: 2026-01-23 01:10:20.362 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.72/32] ContainerID="c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" Namespace="kube-system" Pod="coredns-668d6bf9bc-l6md5" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0" Jan 23 01:10:20.405315 containerd[1569]: 2026-01-23 01:10:20.362 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali535ae1d1121 ContainerID="c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" Namespace="kube-system" Pod="coredns-668d6bf9bc-l6md5" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0" Jan 23 01:10:20.405315 containerd[1569]: 2026-01-23 01:10:20.372 
[INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" Namespace="kube-system" Pod="coredns-668d6bf9bc-l6md5" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0" Jan 23 01:10:20.405712 containerd[1569]: 2026-01-23 01:10:20.372 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" Namespace="kube-system" Pod="coredns-668d6bf9bc-l6md5" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5ce5d673-fa45-45a7-8888-a98708c476b0", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-8ec31cc6d937c483eeba", ContainerID:"c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a", Pod:"coredns-668d6bf9bc-l6md5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali535ae1d1121", MAC:"46:eb:8c:ab:52:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:10:20.405712 containerd[1569]: 2026-01-23 01:10:20.396 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" Namespace="kube-system" Pod="coredns-668d6bf9bc-l6md5" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--8ec31cc6d937c483eeba-k8s-coredns--668d6bf9bc--l6md5-eth0" Jan 23 01:10:20.448038 containerd[1569]: time="2026-01-23T01:10:20.447785647Z" level=info msg="connecting to shim c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a" address="unix:///run/containerd/s/a4c0b81ec2c297ce72f45679b0dd6cb3b78cd46ee8e45e7bca2f18934f3d620e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:20.463471 kubelet[2789]: E0123 01:10:20.463389 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" podUID="ae9746a2-a617-45a0-ab4a-8c3ff369f251" Jan 23 01:10:20.526523 systemd[1]: Started cri-containerd-c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a.scope - libcontainer container c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a. Jan 23 01:10:20.543388 systemd-networkd[1460]: calid88c8e89ce4: Gained IPv6LL Jan 23 01:10:20.631405 containerd[1569]: time="2026-01-23T01:10:20.629900348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l6md5,Uid:5ce5d673-fa45-45a7-8888-a98708c476b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a\"" Jan 23 01:10:20.638941 containerd[1569]: time="2026-01-23T01:10:20.638603776Z" level=info msg="CreateContainer within sandbox \"c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:10:20.658190 containerd[1569]: time="2026-01-23T01:10:20.658140510Z" level=info msg="Container 5656d5a1dcb15b0974d06c775ce01fc37748eef1197ed43268da021eb211898e: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:20.669809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount565486916.mount: Deactivated successfully. Jan 23 01:10:20.670729 containerd[1569]: time="2026-01-23T01:10:20.670536369Z" level=info msg="CreateContainer within sandbox \"c702ecd3e4fc4b612a460baba09b42d7ceee0c59674e4be12cf35a3573ee8b4a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5656d5a1dcb15b0974d06c775ce01fc37748eef1197ed43268da021eb211898e\"" Jan 23 01:10:20.673638 containerd[1569]: time="2026-01-23T01:10:20.673477853Z" level=info msg="StartContainer for \"5656d5a1dcb15b0974d06c775ce01fc37748eef1197ed43268da021eb211898e\"" Jan 23 01:10:20.677949 containerd[1569]: time="2026-01-23T01:10:20.677843576Z" level=info msg="connecting to shim 5656d5a1dcb15b0974d06c775ce01fc37748eef1197ed43268da021eb211898e" address="unix:///run/containerd/s/a4c0b81ec2c297ce72f45679b0dd6cb3b78cd46ee8e45e7bca2f18934f3d620e" protocol=ttrpc version=3 Jan 23 01:10:20.724167 systemd[1]: Started cri-containerd-5656d5a1dcb15b0974d06c775ce01fc37748eef1197ed43268da021eb211898e.scope - libcontainer container 5656d5a1dcb15b0974d06c775ce01fc37748eef1197ed43268da021eb211898e. 
Jan 23 01:10:20.803766 containerd[1569]: time="2026-01-23T01:10:20.803710879Z" level=info msg="StartContainer for \"5656d5a1dcb15b0974d06c775ce01fc37748eef1197ed43268da021eb211898e\" returns successfully" Jan 23 01:10:21.119497 systemd-networkd[1460]: vxlan.calico: Gained IPv6LL Jan 23 01:10:21.465026 kubelet[2789]: E0123 01:10:21.464944 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" podUID="ae9746a2-a617-45a0-ab4a-8c3ff369f251" Jan 23 01:10:21.496135 kubelet[2789]: I0123 01:10:21.496052 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l6md5" podStartSLOduration=45.496023781 podStartE2EDuration="45.496023781s" podCreationTimestamp="2026-01-23 01:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:10:21.49040047 +0000 UTC m=+50.562909366" watchObservedRunningTime="2026-01-23 01:10:21.496023781 +0000 UTC m=+50.568532646" Jan 23 01:10:22.335214 systemd-networkd[1460]: cali535ae1d1121: Gained IPv6LL Jan 23 01:10:25.142229 containerd[1569]: time="2026-01-23T01:10:25.142117430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:10:25.303417 containerd[1569]: time="2026-01-23T01:10:25.303070850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:25.304682 containerd[1569]: time="2026-01-23T01:10:25.304624683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:10:25.305393 containerd[1569]: time="2026-01-23T01:10:25.304741759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:10:25.305803 kubelet[2789]: E0123 01:10:25.305721 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:10:25.307767 kubelet[2789]: E0123 01:10:25.305986 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:10:25.307767 kubelet[2789]: E0123 01:10:25.306171 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d58f6313d7924c0db85c786610a81b82,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s42pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59dcfd5f76-f9f85_calico-system(2c3341c5-23de-41f7-b063-be3670a7e004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:25.310622 containerd[1569]: time="2026-01-23T01:10:25.310561184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:10:25.327502 ntpd[1671]: Listen normally on 6 vxlan.calico 192.168.114.64:123 Jan 23 01:10:25.327619 ntpd[1671]: Listen normally on 7 cali4a1de1b79b4 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 01:10:25.328276 ntpd[1671]: 23 Jan 01:10:25 ntpd[1671]: Listen normally on 6 vxlan.calico 192.168.114.64:123 Jan 23 01:10:25.328276 ntpd[1671]: 23 Jan 01:10:25 ntpd[1671]: Listen normally on 7 cali4a1de1b79b4 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 01:10:25.328276 ntpd[1671]: 23 Jan 01:10:25 ntpd[1671]: Listen normally on 8 cali7742db83306 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 23 01:10:25.328276 ntpd[1671]: 23 Jan 01:10:25 ntpd[1671]: Listen normally on 9 calie3e851d3222 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 23 01:10:25.328276 ntpd[1671]: 23 Jan 01:10:25 ntpd[1671]: Listen normally on 10 cali297406b9274 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 23 01:10:25.328276 ntpd[1671]: 23 Jan 01:10:25 ntpd[1671]: Listen normally on 11 cali79ee2a58aa5 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 01:10:25.328276 ntpd[1671]: 23 Jan 01:10:25 ntpd[1671]: Listen normally on 12 cali8b5e20b64a0 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 01:10:25.328276 ntpd[1671]: 23 Jan 01:10:25 ntpd[1671]: Listen normally on 13 calid88c8e89ce4 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 01:10:25.328276 ntpd[1671]: 23 Jan 01:10:25 ntpd[1671]: Listen normally on 14 vxlan.calico [fe80::64cf:ddff:fe77:ceb6%11]:123 Jan 23 01:10:25.328276 ntpd[1671]: 23 Jan 01:10:25 ntpd[1671]: Listen normally on 15 cali535ae1d1121 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 01:10:25.327679 ntpd[1671]: Listen normally 
on 8 cali7742db83306 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 23 01:10:25.327722 ntpd[1671]: Listen normally on 9 calie3e851d3222 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 23 01:10:25.327763 ntpd[1671]: Listen normally on 10 cali297406b9274 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 23 01:10:25.327805 ntpd[1671]: Listen normally on 11 cali79ee2a58aa5 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 01:10:25.327844 ntpd[1671]: Listen normally on 12 cali8b5e20b64a0 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 01:10:25.327885 ntpd[1671]: Listen normally on 13 calid88c8e89ce4 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 01:10:25.327959 ntpd[1671]: Listen normally on 14 vxlan.calico [fe80::64cf:ddff:fe77:ceb6%11]:123 Jan 23 01:10:25.328002 ntpd[1671]: Listen normally on 15 cali535ae1d1121 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 01:10:25.479149 containerd[1569]: time="2026-01-23T01:10:25.478973689Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:25.480650 containerd[1569]: time="2026-01-23T01:10:25.480588777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:10:25.480817 containerd[1569]: time="2026-01-23T01:10:25.480691028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:10:25.481056 kubelet[2789]: E0123 01:10:25.481004 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:10:25.481181 kubelet[2789]: E0123 01:10:25.481069 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:10:25.481292 kubelet[2789]: E0123 01:10:25.481227 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s42pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59dcfd5f76-f9f85_calico-system(2c3341c5-23de-41f7-b063-be3670a7e004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:25.483019 kubelet[2789]: E0123 01:10:25.482956 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59dcfd5f76-f9f85" podUID="2c3341c5-23de-41f7-b063-be3670a7e004" Jan 23 01:10:31.139856 containerd[1569]: time="2026-01-23T01:10:31.139647825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:10:31.308979 containerd[1569]: time="2026-01-23T01:10:31.308872561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:31.311107 containerd[1569]: time="2026-01-23T01:10:31.311011349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:10:31.311107 containerd[1569]: time="2026-01-23T01:10:31.311069983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:10:31.311400 kubelet[2789]: E0123 01:10:31.311306 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:31.311400 kubelet[2789]: E0123 01:10:31.311391 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:31.311966 kubelet[2789]: E0123 01:10:31.311579 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rwh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d785d7599-c6hvd_calico-apiserver(107c88ad-a23e-4977-926b-0153678bb502): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:31.313415 kubelet[2789]: E0123 01:10:31.313338 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" podUID="107c88ad-a23e-4977-926b-0153678bb502" Jan 23 01:10:33.141318 containerd[1569]: time="2026-01-23T01:10:33.141269988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:10:33.295186 containerd[1569]: time="2026-01-23T01:10:33.295118655Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:33.297004 containerd[1569]: time="2026-01-23T01:10:33.296891421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:10:33.297281 containerd[1569]: time="2026-01-23T01:10:33.296929240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:10:33.297379 kubelet[2789]: E0123 01:10:33.297235 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:10:33.297379 kubelet[2789]: E0123 01:10:33.297304 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:10:33.298974 kubelet[2789]: E0123 01:10:33.298434 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcjmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7456665ddf-rb6bt_calico-system(064b559c-bfe9-4534-b533-689a0c2791a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:33.299326 containerd[1569]: time="2026-01-23T01:10:33.298188272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:10:33.300217 kubelet[2789]: E0123 01:10:33.300145 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" 
podUID="064b559c-bfe9-4534-b533-689a0c2791a2" Jan 23 01:10:33.454829 containerd[1569]: time="2026-01-23T01:10:33.454520774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:33.456763 containerd[1569]: time="2026-01-23T01:10:33.456700544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:10:33.456763 containerd[1569]: time="2026-01-23T01:10:33.456718304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:10:33.457253 kubelet[2789]: E0123 01:10:33.457061 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:10:33.457404 kubelet[2789]: E0123 01:10:33.457257 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:10:33.457517 kubelet[2789]: E0123 01:10:33.457451 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4fjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-47s7m_calico-system(88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:33.460933 containerd[1569]: time="2026-01-23T01:10:33.460875353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:10:33.614803 containerd[1569]: time="2026-01-23T01:10:33.614606516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:33.616291 containerd[1569]: time="2026-01-23T01:10:33.616153752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:10:33.616291 containerd[1569]: time="2026-01-23T01:10:33.616207739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:10:33.616594 kubelet[2789]: E0123 01:10:33.616528 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:10:33.616742 kubelet[2789]: E0123 01:10:33.616600 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:10:33.616845 kubelet[2789]: E0123 01:10:33.616777 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4fjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-47s7m_calico-system(88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:33.618526 kubelet[2789]: E0123 01:10:33.618458 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:10:34.138742 containerd[1569]: time="2026-01-23T01:10:34.138606420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:10:34.300150 containerd[1569]: time="2026-01-23T01:10:34.300081384Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:34.301815 containerd[1569]: time="2026-01-23T01:10:34.301677427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:10:34.301815 containerd[1569]: time="2026-01-23T01:10:34.301736310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:10:34.302123 kubelet[2789]: E0123 01:10:34.302020 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:34.302561 kubelet[2789]: E0123 01:10:34.302134 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:34.302561 kubelet[2789]: E0123 01:10:34.302482 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdrwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d785d7599-4z2jg_calico-apiserver(ae9746a2-a617-45a0-ab4a-8c3ff369f251): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:34.304215 containerd[1569]: time="2026-01-23T01:10:34.304118751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:10:34.304457 kubelet[2789]: E0123 01:10:34.304137 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" podUID="ae9746a2-a617-45a0-ab4a-8c3ff369f251" Jan 23 01:10:34.467991 containerd[1569]: time="2026-01-23T01:10:34.467244184Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:34.469044 containerd[1569]: time="2026-01-23T01:10:34.468866963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:10:34.469044 containerd[1569]: time="2026-01-23T01:10:34.468993861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:10:34.469487 kubelet[2789]: E0123 01:10:34.469431 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:10:34.469622 kubelet[2789]: E0123 01:10:34.469502 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:10:34.470379 kubelet[2789]: E0123 01:10:34.470257 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5jz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rx4sc_calico-system(f817b277-502c-42fa-96de-77b7a2b164dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:34.481208 kubelet[2789]: E0123 01:10:34.481025 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rx4sc" podUID="f817b277-502c-42fa-96de-77b7a2b164dc" Jan 23 01:10:39.142113 kubelet[2789]: E0123 
01:10:39.141691 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59dcfd5f76-f9f85" podUID="2c3341c5-23de-41f7-b063-be3670a7e004" Jan 23 01:10:46.138827 kubelet[2789]: E0123 01:10:46.138278 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" podUID="107c88ad-a23e-4977-926b-0153678bb502" Jan 23 01:10:47.146658 kubelet[2789]: E0123 01:10:47.146592 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" podUID="064b559c-bfe9-4534-b533-689a0c2791a2" Jan 23 01:10:47.149812 kubelet[2789]: E0123 01:10:47.149700 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:10:48.139622 kubelet[2789]: E0123 01:10:48.139556 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rx4sc" podUID="f817b277-502c-42fa-96de-77b7a2b164dc" Jan 23 01:10:48.142861 kubelet[2789]: E0123 01:10:48.140264 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" podUID="ae9746a2-a617-45a0-ab4a-8c3ff369f251" Jan 23 01:10:50.140791 containerd[1569]: time="2026-01-23T01:10:50.140093061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:10:50.312159 containerd[1569]: time="2026-01-23T01:10:50.311893644Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:50.314216 containerd[1569]: time="2026-01-23T01:10:50.314138945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:10:50.314386 containerd[1569]: time="2026-01-23T01:10:50.314279732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:10:50.314586 kubelet[2789]: E0123 01:10:50.314531 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:10:50.316331 kubelet[2789]: E0123 01:10:50.314605 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:10:50.316331 kubelet[2789]: E0123 01:10:50.314778 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d58f6313d7924c0db85c786610a81b82,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s42pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59dcfd5f76-f9f85_calico-system(2c3341c5-23de-41f7-b063-be3670a7e004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:50.317659 containerd[1569]: time="2026-01-23T01:10:50.317619881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:10:50.478557 containerd[1569]: time="2026-01-23T01:10:50.476964620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:50.479547 containerd[1569]: time="2026-01-23T01:10:50.479442372Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:10:50.479968 containerd[1569]: time="2026-01-23T01:10:50.479494461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:10:50.481021 kubelet[2789]: E0123 01:10:50.480284 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:10:50.481021 kubelet[2789]: E0123 01:10:50.480348 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:10:50.481021 kubelet[2789]: E0123 01:10:50.480506 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s42pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59dcfd5f76-f9f85_calico-system(2c3341c5-23de-41f7-b063-be3670a7e004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:50.482218 kubelet[2789]: E0123 01:10:50.482162 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59dcfd5f76-f9f85" podUID="2c3341c5-23de-41f7-b063-be3670a7e004" Jan 23 01:10:57.142936 containerd[1569]: time="2026-01-23T01:10:57.141681942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:10:57.304372 containerd[1569]: time="2026-01-23T01:10:57.304130352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 
23 01:10:57.306572 containerd[1569]: time="2026-01-23T01:10:57.306388191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:10:57.306572 containerd[1569]: time="2026-01-23T01:10:57.306515282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:10:57.308024 kubelet[2789]: E0123 01:10:57.306730 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:57.308024 kubelet[2789]: E0123 01:10:57.306799 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:57.308588 kubelet[2789]: E0123 01:10:57.308068 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rwh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-5d785d7599-c6hvd_calico-apiserver(107c88ad-a23e-4977-926b-0153678bb502): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:57.310007 kubelet[2789]: E0123 01:10:57.309957 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" podUID="107c88ad-a23e-4977-926b-0153678bb502" Jan 23 01:10:59.724679 systemd[1]: Started sshd@7-10.128.0.88:22-4.153.228.146:36756.service - OpenSSH per-connection server daemon (4.153.228.146:36756). Jan 23 01:10:59.988060 sshd[4924]: Accepted publickey for core from 4.153.228.146 port 36756 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:10:59.991713 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:00.003418 systemd-logind[1542]: New session 8 of user core. Jan 23 01:11:00.013161 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:11:00.320964 sshd[4927]: Connection closed by 4.153.228.146 port 36756 Jan 23 01:11:00.322240 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:00.335156 systemd[1]: sshd@7-10.128.0.88:22-4.153.228.146:36756.service: Deactivated successfully. Jan 23 01:11:00.340660 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:11:00.344097 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:11:00.347445 systemd-logind[1542]: Removed session 8. 
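The containerd records above ("fetch failed after status: 404 Not Found" host=ghcr.io) show the registry answering the manifest request for each ghcr.io/flatcar/calico/*:v3.30.4 tag with a 404, which kubelet then surfaces as ErrImagePull. A minimal Go sketch of reproducing that check by hand, assuming ghcr.io's standard anonymous-token flow for public repositories (the /token and /v2/<repo>/manifests/<tag> endpoints are the generic OCI distribution convention, not something taken from this log):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// manifestExists is a hypothetical helper: it asks ghcr.io whether a tag's
// manifest resolves, using the anonymous pull-token flow public repos allow.
func manifestExists(repo, tag string) (bool, error) {
	// Step 1: obtain an anonymous pull token for the repository.
	resp, err := http.Get(fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull", repo))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// Step 2: HEAD the manifest. 200 means the tag resolves; 404 matches the
	// "not found" errors containerd logs above.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Add("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := manifestExists("flatcar/calico/goldmane", "v3.30.4")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("manifest exists:", ok)
}

Against the tags in this log the check returns false; the small "bytes read=73..93" figures containerd reports per attempt are presumably just the registry's short error body.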
Jan 23 01:11:01.145259 kubelet[2789]: E0123 01:11:01.145122 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59dcfd5f76-f9f85" podUID="2c3341c5-23de-41f7-b063-be3670a7e004" Jan 23 01:11:02.147039 containerd[1569]: time="2026-01-23T01:11:02.144982234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:11:02.318103 containerd[1569]: time="2026-01-23T01:11:02.317961949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:02.320831 containerd[1569]: time="2026-01-23T01:11:02.320676810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:11:02.320831 containerd[1569]: time="2026-01-23T01:11:02.320789282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:11:02.321380 kubelet[2789]: E0123 01:11:02.320991 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:11:02.321380 kubelet[2789]: E0123 01:11:02.321301 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:11:02.323344 kubelet[2789]: E0123 01:11:02.321630 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcjmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7456665ddf-rb6bt_calico-system(064b559c-bfe9-4534-b533-689a0c2791a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:02.323344 kubelet[2789]: E0123 01:11:02.323091 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" podUID="064b559c-bfe9-4534-b533-689a0c2791a2" Jan 23 01:11:02.324145 containerd[1569]: time="2026-01-23T01:11:02.323393310Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:11:02.486544 containerd[1569]: time="2026-01-23T01:11:02.486372313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:02.489439 containerd[1569]: time="2026-01-23T01:11:02.489367385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:11:02.490920 containerd[1569]: time="2026-01-23T01:11:02.489416536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:11:02.491019 kubelet[2789]: E0123 01:11:02.489723 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:11:02.491019 kubelet[2789]: E0123 01:11:02.489784 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:11:02.491019 kubelet[2789]: E0123 01:11:02.490173 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5jz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rx4sc_calico-system(f817b277-502c-42fa-96de-77b7a2b164dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:02.491938 containerd[1569]: time="2026-01-23T01:11:02.491882501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:11:02.492389 kubelet[2789]: E0123 01:11:02.492344 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rx4sc" podUID="f817b277-502c-42fa-96de-77b7a2b164dc" Jan 23 01:11:02.659786 containerd[1569]: time="2026-01-23T01:11:02.659708373Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:02.661357 containerd[1569]: time="2026-01-23T01:11:02.661196530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:11:02.661357 containerd[1569]: time="2026-01-23T01:11:02.661317505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:11:02.661938 kubelet[2789]: E0123 01:11:02.661819 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:02.661938 kubelet[2789]: E0123 01:11:02.661888 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:02.662495 kubelet[2789]: E0123 01:11:02.662427 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4fjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-47s7m_calico-system(88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:02.664224 containerd[1569]: time="2026-01-23T01:11:02.663939462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:11:02.826064 containerd[1569]: time="2026-01-23T01:11:02.825152875Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:02.827852 containerd[1569]: time="2026-01-23T01:11:02.827672880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:11:02.827852 containerd[1569]: time="2026-01-23T01:11:02.827807956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:11:02.829944 kubelet[2789]: E0123 01:11:02.828286 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:11:02.829944 kubelet[2789]: E0123 01:11:02.828351 2789 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:11:02.829944 kubelet[2789]: E0123 01:11:02.828632 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdrwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d785d7599-4z2jg_calico-apiserver(ae9746a2-a617-45a0-ab4a-8c3ff369f251): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:02.831209 kubelet[2789]: E0123 01:11:02.830451 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" podUID="ae9746a2-a617-45a0-ab4a-8c3ff369f251" Jan 23 01:11:02.831385 containerd[1569]: time="2026-01-23T01:11:02.830859010Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:11:02.990196 containerd[1569]: time="2026-01-23T01:11:02.989957691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:02.991656 containerd[1569]: time="2026-01-23T01:11:02.991494914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:11:02.991656 containerd[1569]: time="2026-01-23T01:11:02.991619491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:11:02.993725 kubelet[2789]: E0123 01:11:02.993023 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:02.993725 kubelet[2789]: E0123 01:11:02.993086 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:02.993725 kubelet[2789]: E0123 01:11:02.993260 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4fjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-47s7m_calico-system(88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:02.994946 kubelet[2789]: E0123 01:11:02.994865 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:11:05.369307 systemd[1]: Started sshd@8-10.128.0.88:22-4.153.228.146:52682.service - OpenSSH per-connection server daemon (4.153.228.146:52682). Jan 23 01:11:05.615054 sshd[4948]: Accepted publickey for core from 4.153.228.146 port 52682 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:11:05.618244 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:05.630851 systemd-logind[1542]: New session 9 of user core. 
Jan 23 01:11:05.638169 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:11:05.970486 sshd[4951]: Connection closed by 4.153.228.146 port 52682 Jan 23 01:11:05.972095 sshd-session[4948]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:05.980526 systemd[1]: sshd@8-10.128.0.88:22-4.153.228.146:52682.service: Deactivated successfully. Jan 23 01:11:05.986015 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:11:05.988425 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:11:05.993200 systemd-logind[1542]: Removed session 9. Jan 23 01:11:11.019782 systemd[1]: Started sshd@9-10.128.0.88:22-4.153.228.146:52694.service - OpenSSH per-connection server daemon (4.153.228.146:52694). Jan 23 01:11:11.145485 kubelet[2789]: E0123 01:11:11.145432 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" podUID="107c88ad-a23e-4977-926b-0153678bb502" Jan 23 01:11:11.302852 sshd[4965]: Accepted publickey for core from 4.153.228.146 port 52694 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:11:11.306049 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:11.316701 systemd-logind[1542]: New session 10 of user core. Jan 23 01:11:11.322257 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 01:11:11.609716 sshd[4968]: Connection closed by 4.153.228.146 port 52694 Jan 23 01:11:11.610830 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:11.624772 systemd[1]: sshd@9-10.128.0.88:22-4.153.228.146:52694.service: Deactivated successfully. Jan 23 01:11:11.630066 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:11:11.633334 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:11:11.637153 systemd-logind[1542]: Removed session 10. Jan 23 01:11:11.656716 systemd[1]: Started sshd@10-10.128.0.88:22-4.153.228.146:52698.service - OpenSSH per-connection server daemon (4.153.228.146:52698). Jan 23 01:11:11.918025 sshd[4981]: Accepted publickey for core from 4.153.228.146 port 52698 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:11:11.922435 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:11.934019 systemd-logind[1542]: New session 11 of user core. Jan 23 01:11:11.941171 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 01:11:12.326570 sshd[4984]: Connection closed by 4.153.228.146 port 52698 Jan 23 01:11:12.328201 sshd-session[4981]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:12.341630 systemd[1]: sshd@10-10.128.0.88:22-4.153.228.146:52698.service: Deactivated successfully. Jan 23 01:11:12.349839 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:11:12.353454 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit. 
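Note the cadence of the pod_workers.go records: retries for any one image are spaced minutes apart rather than attempted continuously. That is kubelet's image-pull backoff, which doubles the delay after each failed pull from a 10-second base up to a 300-second cap (the defaults in kubelet's image manager; treat the exact numbers as an assumption for other versions or configurations). A sketch of the resulting schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial backoff, doubling per failed
	// pull, capped at 300s. This reproduces why the pulls above settle into
	// retries roughly every five minutes once the cap is reached.
	delay, maxDelay := 10*time.Second, 300*time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: next pull in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}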
Jan 23 01:11:12.376230 systemd[1]: Started sshd@11-10.128.0.88:22-4.153.228.146:52706.service - OpenSSH per-connection server daemon (4.153.228.146:52706). Jan 23 01:11:12.382670 systemd-logind[1542]: Removed session 11. Jan 23 01:11:12.639874 sshd[4994]: Accepted publickey for core from 4.153.228.146 port 52706 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:11:12.642723 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:12.653118 systemd-logind[1542]: New session 12 of user core. Jan 23 01:11:12.659204 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:11:12.965425 sshd[4997]: Connection closed by 4.153.228.146 port 52706 Jan 23 01:11:12.967247 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:12.979851 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:11:12.981822 systemd[1]: sshd@11-10.128.0.88:22-4.153.228.146:52706.service: Deactivated successfully. Jan 23 01:11:12.991844 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:11:12.999791 systemd-logind[1542]: Removed session 12. Jan 23 01:11:14.140415 kubelet[2789]: E0123 01:11:14.140333 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" podUID="ae9746a2-a617-45a0-ab4a-8c3ff369f251" Jan 23 01:11:15.142250 kubelet[2789]: E0123 01:11:15.142184 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59dcfd5f76-f9f85" podUID="2c3341c5-23de-41f7-b063-be3670a7e004" Jan 23 01:11:16.139682 kubelet[2789]: E0123 01:11:16.139010 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" 
podUID="064b559c-bfe9-4534-b533-689a0c2791a2" Jan 23 01:11:17.145937 kubelet[2789]: E0123 01:11:17.145458 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rx4sc" podUID="f817b277-502c-42fa-96de-77b7a2b164dc" Jan 23 01:11:18.012934 systemd[1]: Started sshd@12-10.128.0.88:22-4.153.228.146:48082.service - OpenSSH per-connection server daemon (4.153.228.146:48082). Jan 23 01:11:18.141083 kubelet[2789]: E0123 01:11:18.140999 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:11:18.275184 sshd[5037]: Accepted publickey for core from 4.153.228.146 port 48082 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:11:18.277525 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:18.285744 systemd-logind[1542]: New session 13 of user core. Jan 23 01:11:18.295157 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:11:18.598507 sshd[5040]: Connection closed by 4.153.228.146 port 48082 Jan 23 01:11:18.599230 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:18.610876 systemd[1]: sshd@12-10.128.0.88:22-4.153.228.146:48082.service: Deactivated successfully. Jan 23 01:11:18.615950 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:11:18.621683 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:11:18.624687 systemd-logind[1542]: Removed session 13. Jan 23 01:11:23.647763 systemd[1]: Started sshd@13-10.128.0.88:22-4.153.228.146:48096.service - OpenSSH per-connection server daemon (4.153.228.146:48096). Jan 23 01:11:23.890645 sshd[5052]: Accepted publickey for core from 4.153.228.146 port 48096 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:11:23.893226 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:23.904264 systemd-logind[1542]: New session 14 of user core. Jan 23 01:11:23.910196 systemd[1]: Started session-14.scope - Session 14 of User core. 
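Each "Error syncing pod, skipping" record corresponds to a container sitting in the Waiting state with reason ErrImagePull or ImagePullBackOff. The same information can be read from the API server instead of the node journal; a hedged client-go sketch (the module paths are the standard k8s.io ones, and the kubeconfig location is an assumption about this cluster):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: an admin kubeconfig at the conventional kubeadm path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("calico-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			// Waiting is non-nil for containers that have never started,
			// with Reason set to ErrImagePull / ImagePullBackOff here.
			if st.State.Waiting != nil {
				fmt.Printf("%s/%s: %s\n", p.Name, st.Name, st.State.Waiting.Reason)
			}
		}
	}
}

For the pods in this log it would print lines such as goldmane-666569f655-rx4sc/goldmane: ImagePullBackOff.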
Jan 23 01:11:24.140102 kubelet[2789]: E0123 01:11:24.140005 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" podUID="107c88ad-a23e-4977-926b-0153678bb502" Jan 23 01:11:24.168947 sshd[5055]: Connection closed by 4.153.228.146 port 48096 Jan 23 01:11:24.169753 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:24.177253 systemd[1]: sshd@13-10.128.0.88:22-4.153.228.146:48096.service: Deactivated successfully. Jan 23 01:11:24.183804 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 01:11:24.192883 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:11:24.196471 systemd-logind[1542]: Removed session 14. Jan 23 01:11:26.139520 kubelet[2789]: E0123 01:11:26.139441 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59dcfd5f76-f9f85" podUID="2c3341c5-23de-41f7-b063-be3670a7e004" Jan 23 01:11:28.139736 kubelet[2789]: E0123 01:11:28.139478 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" podUID="ae9746a2-a617-45a0-ab4a-8c3ff369f251" Jan 23 01:11:29.140654 kubelet[2789]: E0123 01:11:29.140169 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" podUID="064b559c-bfe9-4534-b533-689a0c2791a2" Jan 23 
01:11:29.223608 systemd[1]: Started sshd@14-10.128.0.88:22-4.153.228.146:48434.service - OpenSSH per-connection server daemon (4.153.228.146:48434). Jan 23 01:11:29.515974 sshd[5069]: Accepted publickey for core from 4.153.228.146 port 48434 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:11:29.518320 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:29.528582 systemd-logind[1542]: New session 15 of user core. Jan 23 01:11:29.534191 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 01:11:29.825113 sshd[5072]: Connection closed by 4.153.228.146 port 48434 Jan 23 01:11:29.827092 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:29.838693 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:11:29.840263 systemd[1]: sshd@14-10.128.0.88:22-4.153.228.146:48434.service: Deactivated successfully. Jan 23 01:11:29.848341 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:11:29.858239 systemd-logind[1542]: Removed session 15. Jan 23 01:11:31.141852 kubelet[2789]: E0123 01:11:31.141764 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rx4sc" podUID="f817b277-502c-42fa-96de-77b7a2b164dc" Jan 23 01:11:32.144579 kubelet[2789]: E0123 01:11:32.144461 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e" Jan 23 01:11:34.875540 systemd[1]: Started sshd@15-10.128.0.88:22-4.153.228.146:39864.service - OpenSSH per-connection server daemon (4.153.228.146:39864). Jan 23 01:11:35.156889 sshd[5086]: Accepted publickey for core from 4.153.228.146 port 39864 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 01:11:35.159798 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:35.169985 systemd-logind[1542]: New session 16 of user core. Jan 23 01:11:35.177333 systemd[1]: Started session-16.scope - Session 16 of User core. 
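The &Container{...} blobs in the kubelet records are the Kubernetes API objects printed with Go's struct formatter. Restated as source, the goldmane entry dumped earlier corresponds roughly to the corev1.Container below (reduced to the fields visible in the dump, with volume mounts and securityContext omitted; a reading aid reconstructed from the log, not the Tigera operator's actual code):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Reconstructed from the logged dump: name, image, env, probes, and pull
// policy match the values kubelet printed above.
var goldmane = corev1.Container{
	Name:  "goldmane",
	Image: "ghcr.io/flatcar/calico/goldmane:v3.30.4",
	Env: []corev1.EnvVar{
		{Name: "LOG_LEVEL", Value: "INFO"},
		{Name: "PORT", Value: "7443"},
		{Name: "SERVER_CERT_PATH", Value: "/goldmane-key-pair/tls.crt"},
		{Name: "SERVER_KEY_PATH", Value: "/goldmane-key-pair/tls.key"},
		{Name: "CA_CERT_PATH", Value: "/etc/pki/tls/certs/tigera-ca-bundle.crt"},
		{Name: "PUSH_URL", Value: "https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk"},
		{Name: "FILE_CONFIG_PATH", Value: "/config/config.json"},
		{Name: "HEALTH_ENABLED", Value: "true"},
	},
	LivenessProbe: &corev1.Probe{
		ProbeHandler:     corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: []string{"/health", "-live"}}},
		TimeoutSeconds:   5,
		PeriodSeconds:    60,
		SuccessThreshold: 1,
		FailureThreshold: 3,
	},
	ReadinessProbe: &corev1.Probe{
		ProbeHandler:     corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: []string{"/health", "-ready"}}},
		TimeoutSeconds:   5,
		PeriodSeconds:    30,
		SuccessThreshold: 1,
		FailureThreshold: 3,
	},
	ImagePullPolicy: corev1.PullIfNotPresent,
}

func main() { fmt.Println(goldmane.Name, "->", goldmane.Image) }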
Jan 23 01:11:35.542871 sshd[5089]: Connection closed by 4.153.228.146 port 39864
Jan 23 01:11:35.546376 sshd-session[5086]: pam_unix(sshd:session): session closed for user core
Jan 23 01:11:35.555468 systemd[1]: sshd@15-10.128.0.88:22-4.153.228.146:39864.service: Deactivated successfully.
Jan 23 01:11:35.556028 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit.
Jan 23 01:11:35.561183 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 01:11:35.567218 systemd-logind[1542]: Removed session 16.
Jan 23 01:11:35.597259 systemd[1]: Started sshd@16-10.128.0.88:22-4.153.228.146:39880.service - OpenSSH per-connection server daemon (4.153.228.146:39880).
Jan 23 01:11:35.875676 sshd[5101]: Accepted publickey for core from 4.153.228.146 port 39880 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:11:35.878866 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:11:35.891135 systemd-logind[1542]: New session 17 of user core.
Jan 23 01:11:35.895600 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 01:11:36.308293 sshd[5104]: Connection closed by 4.153.228.146 port 39880
Jan 23 01:11:36.310044 sshd-session[5101]: pam_unix(sshd:session): session closed for user core
Jan 23 01:11:36.319886 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit.
Jan 23 01:11:36.321609 systemd[1]: sshd@16-10.128.0.88:22-4.153.228.146:39880.service: Deactivated successfully.
Jan 23 01:11:36.328285 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 01:11:36.334630 systemd-logind[1542]: Removed session 17.
Jan 23 01:11:36.359299 systemd[1]: Started sshd@17-10.128.0.88:22-4.153.228.146:39884.service - OpenSSH per-connection server daemon (4.153.228.146:39884).
Jan 23 01:11:36.648244 sshd[5114]: Accepted publickey for core from 4.153.228.146 port 39884 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:11:36.650563 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:11:36.662020 systemd-logind[1542]: New session 18 of user core.
Jan 23 01:11:36.669338 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 01:11:37.880866 sshd[5117]: Connection closed by 4.153.228.146 port 39884
Jan 23 01:11:37.881799 sshd-session[5114]: pam_unix(sshd:session): session closed for user core
Jan 23 01:11:37.891870 systemd[1]: sshd@17-10.128.0.88:22-4.153.228.146:39884.service: Deactivated successfully.
Jan 23 01:11:37.898653 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 01:11:37.905891 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit.
Jan 23 01:11:37.909847 systemd-logind[1542]: Removed session 18.
Jan 23 01:11:37.933038 systemd[1]: Started sshd@18-10.128.0.88:22-4.153.228.146:39894.service - OpenSSH per-connection server daemon (4.153.228.146:39894).
Jan 23 01:11:38.218044 sshd[5137]: Accepted publickey for core from 4.153.228.146 port 39894 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:11:38.220726 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:11:38.238867 systemd-logind[1542]: New session 19 of user core.
Jan 23 01:11:38.240358 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 01:11:38.743673 sshd[5141]: Connection closed by 4.153.228.146 port 39894
Jan 23 01:11:38.746216 sshd-session[5137]: pam_unix(sshd:session): session closed for user core
Jan 23 01:11:38.755860 systemd[1]: sshd@18-10.128.0.88:22-4.153.228.146:39894.service: Deactivated successfully.
Jan 23 01:11:38.761847 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 01:11:38.764400 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit.
Jan 23 01:11:38.768422 systemd-logind[1542]: Removed session 19.
Jan 23 01:11:38.797269 systemd[1]: Started sshd@19-10.128.0.88:22-4.153.228.146:39898.service - OpenSSH per-connection server daemon (4.153.228.146:39898).
Jan 23 01:11:39.056816 sshd[5151]: Accepted publickey for core from 4.153.228.146 port 39898 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:11:39.059438 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:11:39.070689 systemd-logind[1542]: New session 20 of user core.
Jan 23 01:11:39.073706 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 01:11:39.141963 containerd[1569]: time="2026-01-23T01:11:39.141897926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 01:11:39.143646 kubelet[2789]: E0123 01:11:39.142429 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" podUID="ae9746a2-a617-45a0-ab4a-8c3ff369f251"
Jan 23 01:11:39.319582 containerd[1569]: time="2026-01-23T01:11:39.319433582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:11:39.321893 containerd[1569]: time="2026-01-23T01:11:39.321729282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 01:11:39.321893 containerd[1569]: time="2026-01-23T01:11:39.321851996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 01:11:39.323936 kubelet[2789]: E0123 01:11:39.323055 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:11:39.323936 kubelet[2789]: E0123 01:11:39.323120 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:11:39.323936 kubelet[2789]: E0123 01:11:39.323472 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d58f6313d7924c0db85c786610a81b82,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s42pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59dcfd5f76-f9f85_calico-system(2c3341c5-23de-41f7-b063-be3670a7e004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:11:39.324927 containerd[1569]: time="2026-01-23T01:11:39.324516935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:11:39.366575 sshd[5154]: Connection closed by 4.153.228.146 port 39898
Jan 23 01:11:39.370436 sshd-session[5151]: pam_unix(sshd:session): session closed for user core
Jan 23 01:11:39.378189 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit.
Jan 23 01:11:39.379200 systemd[1]: sshd@19-10.128.0.88:22-4.153.228.146:39898.service: Deactivated successfully.
Jan 23 01:11:39.386283 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 01:11:39.394125 systemd-logind[1542]: Removed session 20.
Jan 23 01:11:39.483519 containerd[1569]: time="2026-01-23T01:11:39.482430863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:11:39.484774 containerd[1569]: time="2026-01-23T01:11:39.484609754Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:11:39.484774 containerd[1569]: time="2026-01-23T01:11:39.484732880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:11:39.486117 kubelet[2789]: E0123 01:11:39.485181 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:11:39.486117 kubelet[2789]: E0123 01:11:39.485269 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:11:39.486117 kubelet[2789]: E0123 01:11:39.485590 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rwh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d785d7599-c6hvd_calico-apiserver(107c88ad-a23e-4977-926b-0153678bb502): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:11:39.487547 containerd[1569]: time="2026-01-23T01:11:39.487163308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 01:11:39.488206 kubelet[2789]: E0123 01:11:39.487875 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" podUID="107c88ad-a23e-4977-926b-0153678bb502"
Jan 23 01:11:39.645298 containerd[1569]: time="2026-01-23T01:11:39.645054244Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:11:39.647441 containerd[1569]: time="2026-01-23T01:11:39.647216130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 01:11:39.647441 containerd[1569]: time="2026-01-23T01:11:39.647341000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:11:39.648765 kubelet[2789]: E0123 01:11:39.647841 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:11:39.648765 kubelet[2789]: E0123 01:11:39.648154 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:11:39.648765 kubelet[2789]: E0123 01:11:39.648335 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s42pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59dcfd5f76-f9f85_calico-system(2c3341c5-23de-41f7-b063-be3670a7e004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:11:39.650289 kubelet[2789]: E0123 01:11:39.650238 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59dcfd5f76-f9f85" podUID="2c3341c5-23de-41f7-b063-be3670a7e004"
Jan 23 01:11:41.145513 kubelet[2789]: E0123 01:11:41.145451 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" podUID="064b559c-bfe9-4534-b533-689a0c2791a2"
Jan 23 01:11:42.140877 kubelet[2789]: E0123 01:11:42.140806 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rx4sc" podUID="f817b277-502c-42fa-96de-77b7a2b164dc"
Jan 23 01:11:44.413662 systemd[1]: Started sshd@20-10.128.0.88:22-4.153.228.146:39910.service - OpenSSH per-connection server daemon (4.153.228.146:39910).
Jan 23 01:11:44.696056 sshd[5199]: Accepted publickey for core from 4.153.228.146 port 39910 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:11:44.699651 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:11:44.714240 systemd-logind[1542]: New session 21 of user core.
Jan 23 01:11:44.722074 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 01:11:45.000033 sshd[5202]: Connection closed by 4.153.228.146 port 39910
Jan 23 01:11:45.000884 sshd-session[5199]: pam_unix(sshd:session): session closed for user core
Jan 23 01:11:45.015166 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit.
Jan 23 01:11:45.017213 systemd[1]: sshd@20-10.128.0.88:22-4.153.228.146:39910.service: Deactivated successfully.
Jan 23 01:11:45.021754 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 01:11:45.026133 systemd-logind[1542]: Removed session 21.
Jan 23 01:11:47.140449 containerd[1569]: time="2026-01-23T01:11:47.140399110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 01:11:47.299621 containerd[1569]: time="2026-01-23T01:11:47.299536381Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:11:47.301440 containerd[1569]: time="2026-01-23T01:11:47.301282434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 01:11:47.301440 containerd[1569]: time="2026-01-23T01:11:47.301399469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 01:11:47.302774 kubelet[2789]: E0123 01:11:47.301683 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:11:47.302774 kubelet[2789]: E0123 01:11:47.301865 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:11:47.302774 kubelet[2789]: E0123 01:11:47.302057 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4fjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-47s7m_calico-system(88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:11:47.305325 containerd[1569]: time="2026-01-23T01:11:47.304960441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 01:11:47.465421 containerd[1569]: time="2026-01-23T01:11:47.465220677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:11:47.467577 containerd[1569]: time="2026-01-23T01:11:47.467510778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 01:11:47.467976 containerd[1569]: time="2026-01-23T01:11:47.467544511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 01:11:47.468137 kubelet[2789]: E0123 01:11:47.467884 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:11:47.468312 kubelet[2789]: E0123 01:11:47.468258 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:11:47.471023 kubelet[2789]: E0123 01:11:47.468536 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4fjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-47s7m_calico-system(88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:11:47.471384 kubelet[2789]: E0123 01:11:47.471332 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-47s7m" podUID="88a4fdb4-4d5f-4b12-aeb4-00fd6737d18e"
Jan 23 01:11:50.051843 systemd[1]: Started sshd@21-10.128.0.88:22-4.153.228.146:48322.service - OpenSSH per-connection server daemon (4.153.228.146:48322).
Jan 23 01:11:50.326054 sshd[5213]: Accepted publickey for core from 4.153.228.146 port 48322 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:11:50.328160 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:11:50.338083 systemd-logind[1542]: New session 22 of user core.
Jan 23 01:11:50.346200 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 01:11:50.614826 sshd[5216]: Connection closed by 4.153.228.146 port 48322
Jan 23 01:11:50.616229 sshd-session[5213]: pam_unix(sshd:session): session closed for user core
Jan 23 01:11:50.630226 systemd-logind[1542]: Session 22 logged out. Waiting for processes to exit.
Jan 23 01:11:50.630992 systemd[1]: sshd@21-10.128.0.88:22-4.153.228.146:48322.service: Deactivated successfully.
Jan 23 01:11:50.635895 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 01:11:50.641497 systemd-logind[1542]: Removed session 22.
Jan 23 01:11:51.147974 kubelet[2789]: E0123 01:11:51.147748 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59dcfd5f76-f9f85" podUID="2c3341c5-23de-41f7-b063-be3670a7e004"
Jan 23 01:11:53.146697 kubelet[2789]: E0123 01:11:53.146233 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-c6hvd" podUID="107c88ad-a23e-4977-926b-0153678bb502"
Jan 23 01:11:53.150012 containerd[1569]: time="2026-01-23T01:11:53.149129711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 01:11:53.313978 containerd[1569]: time="2026-01-23T01:11:53.311983496Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:11:53.315847 containerd[1569]: time="2026-01-23T01:11:53.315759093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 01:11:53.317061 containerd[1569]: time="2026-01-23T01:11:53.315941463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:11:53.317329 kubelet[2789]: E0123 01:11:53.317281 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:11:53.317766 kubelet[2789]: E0123 01:11:53.317479 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:11:53.317766 kubelet[2789]: E0123 01:11:53.317689 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcjmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7456665ddf-rb6bt_calico-system(064b559c-bfe9-4534-b533-689a0c2791a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:11:53.319371 kubelet[2789]: E0123 01:11:53.319311 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7456665ddf-rb6bt" podUID="064b559c-bfe9-4534-b533-689a0c2791a2"
Jan 23 01:11:54.139903 containerd[1569]: time="2026-01-23T01:11:54.139844034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:11:54.304264 containerd[1569]: time="2026-01-23T01:11:54.304151115Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:11:54.306097 containerd[1569]: time="2026-01-23T01:11:54.306013301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:11:54.306259 containerd[1569]: time="2026-01-23T01:11:54.306192837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:11:54.306651 kubelet[2789]: E0123 01:11:54.306552 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:11:54.306651 kubelet[2789]: E0123 01:11:54.306622 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:11:54.307843 kubelet[2789]: E0123 01:11:54.307250 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdrwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d785d7599-4z2jg_calico-apiserver(ae9746a2-a617-45a0-ab4a-8c3ff369f251): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:11:54.308750 kubelet[2789]: E0123 01:11:54.308691 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d785d7599-4z2jg" podUID="ae9746a2-a617-45a0-ab4a-8c3ff369f251"
Jan 23 01:11:55.665225 systemd[1]: Started sshd@22-10.128.0.88:22-4.153.228.146:35652.service - OpenSSH per-connection server daemon (4.153.228.146:35652).
Jan 23 01:11:55.923469 sshd[5235]: Accepted publickey for core from 4.153.228.146 port 35652 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:11:55.927262 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:11:55.939384 systemd-logind[1542]: New session 23 of user core.
Jan 23 01:11:55.946886 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 01:11:56.248666 sshd[5238]: Connection closed by 4.153.228.146 port 35652
Jan 23 01:11:56.248501 sshd-session[5235]: pam_unix(sshd:session): session closed for user core
Jan 23 01:11:56.263726 systemd-logind[1542]: Session 23 logged out. Waiting for processes to exit.
Jan 23 01:11:56.264678 systemd[1]: sshd@22-10.128.0.88:22-4.153.228.146:35652.service: Deactivated successfully.
Jan 23 01:11:56.271248 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 01:11:56.277288 systemd-logind[1542]: Removed session 23.
Jan 23 01:11:57.139801 containerd[1569]: time="2026-01-23T01:11:57.139740466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 01:11:57.299940 containerd[1569]: time="2026-01-23T01:11:57.299339061Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:11:57.301380 containerd[1569]: time="2026-01-23T01:11:57.301323980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:11:57.301713 containerd[1569]: time="2026-01-23T01:11:57.301559154Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 01:11:57.302963 kubelet[2789]: E0123 01:11:57.302157 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 01:11:57.304358 kubelet[2789]: E0123 01:11:57.303731 2789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 01:11:57.304584 kubelet[2789]: E0123 01:11:57.304503 2789 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5jz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rx4sc_calico-system(f817b277-502c-42fa-96de-77b7a2b164dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:11:57.306226 kubelet[2789]: E0123 01:11:57.306152 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rx4sc" podUID="f817b277-502c-42fa-96de-77b7a2b164dc"