Jan 17 00:16:26.199215 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:16:26.199287 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:16:26.199307 kernel: BIOS-provided physical RAM map:
Jan 17 00:16:26.199322 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 17 00:16:26.199337 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 17 00:16:26.199351 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 17 00:16:26.199369 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 17 00:16:26.199389 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 17 00:16:26.199423 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 17 00:16:26.199438 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 17 00:16:26.199462 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 17 00:16:26.199478 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 17 00:16:26.199493 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 17 00:16:26.199508 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 17 00:16:26.199535 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 17 00:16:26.199552 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 17 00:16:26.199569 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 17 00:16:26.202795 kernel: NX (Execute Disable) protection: active
Jan 17 00:16:26.202843 kernel: APIC: Static calls initialized
Jan 17 00:16:26.202858 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:16:26.202872 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018
Jan 17 00:16:26.202888 kernel: SMBIOS 2.4 present.
Jan 17 00:16:26.202905 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Jan 17 00:16:26.202922 kernel: Hypervisor detected: KVM
Jan 17 00:16:26.202953 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:16:26.202968 kernel: kvm-clock: using sched offset of 13578770291 cycles
Jan 17 00:16:26.202984 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:16:26.202999 kernel: tsc: Detected 2299.998 MHz processor
Jan 17 00:16:26.203014 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:16:26.203032 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:16:26.203047 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 17 00:16:26.203062 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 17 00:16:26.203076 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:16:26.203111 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 17 00:16:26.203128 kernel: Using GB pages for direct mapping
Jan 17 00:16:26.203142 kernel: Secure boot disabled
Jan 17 00:16:26.203155 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:16:26.203170 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 17 00:16:26.203185 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 17 00:16:26.203200 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 17 00:16:26.203225 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 17 00:16:26.203245 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 17 00:16:26.203261 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Jan 17 00:16:26.203276 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 17 00:16:26.203291 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 17 00:16:26.203306 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 17 00:16:26.203321 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 17 00:16:26.203342 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 17 00:16:26.203358 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 17 00:16:26.203376 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 17 00:16:26.203394 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 17 00:16:26.203410 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 17 00:16:26.203425 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 17 00:16:26.203442 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 17 00:16:26.203472 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 17 00:16:26.203490 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 17 00:16:26.203716 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 17 00:16:26.203752 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:16:26.203772 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:16:26.203792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 00:16:26.203806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 17 00:16:26.203823 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 17 00:16:26.203843 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 17 00:16:26.203862 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 17 00:16:26.203881 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Jan 17 00:16:26.203916 kernel: Zone ranges:
Jan 17 00:16:26.203934 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:16:26.203954 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 00:16:26.203972 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 17 00:16:26.203991 kernel: Movable zone start for each node
Jan 17 00:16:26.204009 kernel: Early memory node ranges
Jan 17 00:16:26.204028 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 17 00:16:26.204047 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 17 00:16:26.204066 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 17 00:16:26.204088 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 17 00:16:26.204104 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 17 00:16:26.204118 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 17 00:16:26.204135 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:16:26.204151 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 17 00:16:26.204167 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 17 00:16:26.204185 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 17 00:16:26.204203 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 17 00:16:26.204221 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 00:16:26.204244 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:16:26.204262 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:16:26.204280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:16:26.204298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:16:26.204315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:16:26.204333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:16:26.204351 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:16:26.204368 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:16:26.204384 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 17 00:16:26.204403 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:16:26.204421 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:16:26.204439 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:16:26.204464 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:16:26.204482 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:16:26.204500 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:16:26.204518 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:16:26.204536 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:16:26.204556 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:16:26.206759 kernel: random: crng init done
Jan 17 00:16:26.206799 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 17 00:16:26.206819 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:16:26.206838 kernel: Fallback order for Node 0: 0
Jan 17 00:16:26.206856 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 17 00:16:26.206874 kernel: Policy zone: Normal
Jan 17 00:16:26.206892 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:16:26.206910 kernel: software IO TLB: area num 2.
Jan 17 00:16:26.206930 kernel: Memory: 7513184K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 347140K reserved, 0K cma-reserved)
Jan 17 00:16:26.206962 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:16:26.206980 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:16:26.206998 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:16:26.207016 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:16:26.207034 kernel: Dynamic Preempt: voluntary
Jan 17 00:16:26.207051 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:16:26.207071 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:16:26.207091 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:16:26.207128 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:16:26.207147 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:16:26.207165 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:16:26.207188 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:16:26.207206 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:16:26.207224 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:16:26.207243 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:16:26.207263 kernel: Console: colour dummy device 80x25
Jan 17 00:16:26.207295 kernel: printk: console [ttyS0] enabled
Jan 17 00:16:26.207312 kernel: ACPI: Core revision 20230628
Jan 17 00:16:26.207331 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:16:26.207347 kernel: x2apic enabled
Jan 17 00:16:26.207365 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:16:26.207382 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 17 00:16:26.207416 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 17 00:16:26.207433 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 17 00:16:26.207449 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 17 00:16:26.207470 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 17 00:16:26.207489 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:16:26.207509 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 17 00:16:26.207527 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 17 00:16:26.207548 kernel: Spectre V2 : Mitigation: IBRS
Jan 17 00:16:26.207565 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:16:26.207598 kernel: RETBleed: Mitigation: IBRS
Jan 17 00:16:26.207617 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 00:16:26.207632 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 17 00:16:26.207656 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 00:16:26.207673 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 00:16:26.207692 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:16:26.207713 kernel: active return thunk: its_return_thunk
Jan 17 00:16:26.207734 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:16:26.207754 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:16:26.207775 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:16:26.207795 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:16:26.207815 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:16:26.207841 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 00:16:26.207861 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:16:26.207881 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:16:26.207901 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:16:26.207922 kernel: landlock: Up and running.
Jan 17 00:16:26.207943 kernel: SELinux: Initializing.
Jan 17 00:16:26.207964 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:16:26.207984 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:16:26.208002 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 17 00:16:26.208027 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:16:26.208049 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:16:26.208070 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:16:26.208090 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 17 00:16:26.208111 kernel: signal: max sigframe size: 1776
Jan 17 00:16:26.208131 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:16:26.208153 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:16:26.208173 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:16:26.208194 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:16:26.208219 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:16:26.208240 kernel: .... node #0, CPUs: #1
Jan 17 00:16:26.208262 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 00:16:26.208294 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 00:16:26.208312 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:16:26.208334 kernel: smpboot: Max logical packages: 1
Jan 17 00:16:26.208354 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 17 00:16:26.208374 kernel: devtmpfs: initialized
Jan 17 00:16:26.208400 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:16:26.208421 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 17 00:16:26.208442 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:16:26.208464 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:16:26.208484 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:16:26.208504 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:16:26.208523 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:16:26.208544 kernel: audit: type=2000 audit(1768608984.591:1): state=initialized audit_enabled=0 res=1
Jan 17 00:16:26.208564 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:16:26.209834 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:16:26.209869 kernel: cpuidle: using governor menu
Jan 17 00:16:26.209890 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:16:26.209912 kernel: dca service started, version 1.12.1
Jan 17 00:16:26.209932 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:16:26.209953 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:16:26.209972 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:16:26.209992 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:16:26.210012 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:16:26.210042 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:16:26.210061 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:16:26.210081 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:16:26.210103 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:16:26.210123 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 00:16:26.210144 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:16:26.210164 kernel: ACPI: Interpreter enabled
Jan 17 00:16:26.210184 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 00:16:26.210205 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:16:26.210230 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:16:26.210250 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 17 00:16:26.210278 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 17 00:16:26.210298 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:16:26.210679 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:16:26.210904 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:16:26.211098 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:16:26.211132 kernel: PCI host bridge to bus 0000:00
Jan 17 00:16:26.211362 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:16:26.211549 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:16:26.212855 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:16:26.213059 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 17 00:16:26.213239 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:16:26.213486 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:16:26.213770 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 17 00:16:26.213991 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 00:16:26.214193 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 00:16:26.214419 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 17 00:16:26.214797 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 17 00:16:26.215017 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 17 00:16:26.215250 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:16:26.215481 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 17 00:16:26.215780 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 17 00:16:26.215999 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 00:16:26.216199 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 17 00:16:26.216721 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 17 00:16:26.216762 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:16:26.216792 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:16:26.216808 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:16:26.216825 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:16:26.216840 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:16:26.216856 kernel: iommu: Default domain type: Translated
Jan 17 00:16:26.216885 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:16:26.216901 kernel: efivars: Registered efivars operations
Jan 17 00:16:26.216919 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:16:26.216936 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:16:26.216961 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 17 00:16:26.216978 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 17 00:16:26.216994 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 17 00:16:26.217014 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 17 00:16:26.217032 kernel: vgaarb: loaded
Jan 17 00:16:26.217052 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:16:26.217071 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:16:26.217091 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:16:26.217112 kernel: pnp: PnP ACPI init
Jan 17 00:16:26.217135 kernel: pnp: PnP ACPI: found 7 devices
Jan 17 00:16:26.217155 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:16:26.217174 kernel: NET: Registered PF_INET protocol family
Jan 17 00:16:26.217194 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:16:26.217214 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 17 00:16:26.217235 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:16:26.217268 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:16:26.217290 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 17 00:16:26.217310 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 17 00:16:26.217334 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:16:26.217355 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:16:26.217375 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:16:26.217395 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:16:26.218628 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:16:26.218857 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:16:26.219051 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:16:26.219228 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 17 00:16:26.219602 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:16:26.219634 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:16:26.219654 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 00:16:26.219674 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 17 00:16:26.219695 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:16:26.219716 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 17 00:16:26.219735 kernel: clocksource: Switched to clocksource tsc
Jan 17 00:16:26.219754 kernel: Initialise system trusted keyrings
Jan 17 00:16:26.219782 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 17 00:16:26.219801 kernel: Key type asymmetric registered
Jan 17 00:16:26.219820 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:16:26.219840 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:16:26.219861 kernel: io scheduler mq-deadline registered
Jan 17 00:16:26.219880 kernel: io scheduler kyber registered
Jan 17 00:16:26.219899 kernel: io scheduler bfq registered
Jan 17 00:16:26.219919 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:16:26.219941 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 00:16:26.220207 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 17 00:16:26.220238 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 17 00:16:26.220484 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 17 00:16:26.220517 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 00:16:26.220770 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 17 00:16:26.220803 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:16:26.220826 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:16:26.220847 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 17 00:16:26.220869 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 17 00:16:26.220898 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 17 00:16:26.221137 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 17 00:16:26.221203 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:16:26.221224 kernel: i8042: Warning: Keylock active
Jan 17 00:16:26.221252 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:16:26.221272 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:16:26.221490 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 00:16:26.221749 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 00:16:26.221944 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:16:25 UTC (1768608985)
Jan 17 00:16:26.222137 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 00:16:26.222166 kernel: intel_pstate: CPU model not supported
Jan 17 00:16:26.222186 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:16:26.222206 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:16:26.222225 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:16:26.222260 kernel: Segment Routing with IPv6
Jan 17 00:16:26.222297 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:16:26.222326 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:16:26.222346 kernel: Key type dns_resolver registered
Jan 17 00:16:26.222364 kernel: IPI shorthand broadcast: enabled
Jan 17 00:16:26.222384 kernel: sched_clock: Marking stable (1028005095, 184429719)->(1267630016, -55195202)
Jan 17 00:16:26.222402 kernel: registered taskstats version 1
Jan 17 00:16:26.222418 kernel: Loading compiled-in X.509 certificates
Jan 17 00:16:26.222436 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:16:26.222454 kernel: Key type .fscrypt registered
Jan 17 00:16:26.222470 kernel: Key type fscrypt-provisioning registered
Jan 17 00:16:26.222494 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:16:26.222512 kernel: ima: No architecture policies found
Jan 17 00:16:26.222532 kernel: clk: Disabling unused clocks
Jan 17 00:16:26.222553 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:16:26.222572 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 17 00:16:26.222590 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:16:26.222629 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:16:26.222646 kernel: Run /init as init process
Jan 17 00:16:26.222660 kernel: with arguments:
Jan 17 00:16:26.222683 kernel: /init
Jan 17 00:16:26.222698 kernel: with environment:
Jan 17 00:16:26.222714 kernel: HOME=/
Jan 17 00:16:26.222730 kernel: TERM=linux
Jan 17 00:16:26.222752 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:16:26.222771 systemd[1]: Detected virtualization google.
Jan 17 00:16:26.222789 systemd[1]: Detected architecture x86-64.
Jan 17 00:16:26.222812 systemd[1]: Running in initrd.
Jan 17 00:16:26.222831 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:16:26.222849 systemd[1]: Hostname set to .
Jan 17 00:16:26.223023 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:16:26.223062 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:16:26.223081 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:16:26.223101 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:16:26.223124 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:16:26.223159 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:16:26.223182 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:16:26.223199 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:16:26.223222 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:16:26.223242 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:16:26.223262 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:16:26.223283 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:16:26.223410 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:16:26.223431 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:16:26.223479 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:16:26.223520 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:16:26.223542 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:16:26.223562 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:16:26.223641 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:16:26.223663 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:16:26.223685 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:16:26.223707 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:16:26.223728 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:16:26.223748 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:16:26.223770 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:16:26.223793 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:16:26.223813 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:16:26.223844 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:16:26.223867 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:16:26.223889 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:16:26.223911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:16:26.224031 systemd-journald[184]: Collecting audit messages is disabled.
Jan 17 00:16:26.224090 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:16:26.224113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:16:26.224147 systemd-journald[184]: Journal started
Jan 17 00:16:26.224194 systemd-journald[184]: Runtime Journal (/run/log/journal/b83b037fa81947ee9d983203186a24cc) is 8.0M, max 148.7M, 140.7M free.
Jan 17 00:16:26.203170 systemd-modules-load[185]: Inserted module 'overlay'
Jan 17 00:16:26.230887 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:16:26.239701 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:16:26.258613 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:16:26.262032 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:16:26.267481 kernel: Bridge firewalling registered
Jan 17 00:16:26.263881 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 17 00:16:26.265890 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:16:26.271343 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:16:26.285502 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:16:26.296469 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:16:26.309120 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:16:26.320940 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:16:26.330977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:16:26.342725 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:16:26.366037 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:16:26.366798 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:16:26.380287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:16:26.400037 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:16:26.438028 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:16:26.457878 dracut-cmdline[216]: dracut-dracut-053
Jan 17 00:16:26.466082 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:16:26.513946 systemd-resolved[217]: Positive Trust Anchors:
Jan 17 00:16:26.513962 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:16:26.514036 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:16:26.620077 kernel: SCSI subsystem initialized
Jan 17 00:16:26.620149 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:16:26.520905 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jan 17 00:16:26.632818 kernel: iscsi: registered transport (tcp)
Jan 17 00:16:26.524633 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:16:26.545986 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:16:26.670189 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:16:26.670310 kernel: QLogic iSCSI HBA Driver
Jan 17 00:16:26.738941 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:16:26.743969 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:16:26.828071 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:16:26.828184 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:16:26.828212 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:16:26.891660 kernel: raid6: avx2x4 gen() 17638 MB/s
Jan 17 00:16:26.912676 kernel: raid6: avx2x2 gen() 17247 MB/s
Jan 17 00:16:26.938768 kernel: raid6: avx2x1 gen() 13196 MB/s
Jan 17 00:16:26.938868 kernel: raid6: using algorithm avx2x4 gen() 17638 MB/s
Jan 17 00:16:26.965866 kernel: raid6: .... xor() 6354 MB/s, rmw enabled
Jan 17 00:16:26.965974 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:16:26.997642 kernel: xor: automatically using best checksumming function avx
Jan 17 00:16:27.189658 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:16:27.208695 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:16:27.214060 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:16:27.266138 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Jan 17 00:16:27.274302 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:16:27.292956 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:16:27.347753 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Jan 17 00:16:27.395555 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:16:27.420927 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:16:27.534057 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:16:27.562891 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:16:27.618103 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:16:27.639407 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:16:27.662848 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:16:27.758772 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:16:27.697760 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:16:27.773991 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:16:27.852939 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:16:27.919197 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:16:27.919718 kernel: blk-mq: reduced tag depth to 10240
Jan 17 00:16:27.919758 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:16:27.919789 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 17 00:16:27.919845 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:16:27.853214 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:16:27.907743 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:16:28.012507 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Jan 17 00:16:28.013064 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 17 00:16:28.013338 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 17 00:16:28.013980 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 17 00:16:28.014463 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 17 00:16:28.014847 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:16:28.014877 kernel: GPT:17805311 != 33554431
Jan 17 00:16:27.930759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:16:28.051826 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:16:28.051873 kernel: GPT:17805311 != 33554431
Jan 17 00:16:28.051899 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:16:28.051925 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:16:28.051961 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 17 00:16:27.931059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:16:27.944005 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:16:27.999143 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:16:28.091175 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:16:28.121660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:16:28.166842 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (453)
Jan 17 00:16:28.166891 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (446)
Jan 17 00:16:28.171305 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 17 00:16:28.220325 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 17 00:16:28.239107 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 17 00:16:28.256874 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 17 00:16:28.289669 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 17 00:16:28.310913 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:16:28.334955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:16:28.357403 disk-uuid[540]: Primary Header is updated.
Jan 17 00:16:28.357403 disk-uuid[540]: Secondary Entries is updated.
Jan 17 00:16:28.357403 disk-uuid[540]: Secondary Header is updated.
Jan 17 00:16:28.383949 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:16:28.402652 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:16:28.427516 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:16:28.455847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:16:29.423915 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:16:29.424021 disk-uuid[541]: The operation has completed successfully.
Jan 17 00:16:29.524909 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:16:29.525110 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:16:29.564959 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:16:29.599753 sh[566]: Success
Jan 17 00:16:29.624625 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:16:29.709993 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:16:29.718888 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:16:29.760434 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:16:29.799207 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:16:29.799343 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:16:29.799371 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:16:29.809162 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:16:29.816218 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:16:29.854634 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:16:29.863481 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:16:29.864783 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:16:29.870975 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:16:29.930272 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:16:29.930372 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:16:29.930401 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:16:29.937457 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:16:29.984237 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:16:29.984296 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:16:29.984322 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:16:29.974635 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:16:29.995301 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:16:30.021981 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:16:30.128461 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:16:30.169136 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:16:30.246364 ignition[671]: Ignition 2.19.0
Jan 17 00:16:30.246386 ignition[671]: Stage: fetch-offline
Jan 17 00:16:30.250799 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:16:30.246464 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:30.258144 systemd-networkd[750]: lo: Link UP
Jan 17 00:16:30.246481 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:30.258154 systemd-networkd[750]: lo: Gained carrier
Jan 17 00:16:30.246715 ignition[671]: parsed url from cmdline: ""
Jan 17 00:16:30.260518 systemd-networkd[750]: Enumeration completed
Jan 17 00:16:30.246723 ignition[671]: no config URL provided
Jan 17 00:16:30.261820 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:16:30.246733 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:16:30.261831 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:16:30.246750 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:16:30.264465 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:16:30.246764 ignition[671]: failed to fetch config: resource requires networking
Jan 17 00:16:30.264960 systemd[1]: Reached target network.target - Network.
Jan 17 00:16:30.247128 ignition[671]: Ignition finished successfully
Jan 17 00:16:30.264966 systemd-networkd[750]: eth0: Link UP
Jan 17 00:16:30.344492 ignition[760]: Ignition 2.19.0
Jan 17 00:16:30.264973 systemd-networkd[750]: eth0: Gained carrier
Jan 17 00:16:30.344502 ignition[760]: Stage: fetch
Jan 17 00:16:30.264998 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:16:30.345519 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:30.282764 systemd-networkd[750]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694'
Jan 17 00:16:30.345548 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:30.282785 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.35/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 17 00:16:30.345731 ignition[760]: parsed url from cmdline: ""
Jan 17 00:16:30.297945 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:16:30.345736 ignition[760]: no config URL provided
Jan 17 00:16:30.359257 unknown[760]: fetched base config from "system"
Jan 17 00:16:30.345744 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:16:30.359271 unknown[760]: fetched base config from "system"
Jan 17 00:16:30.345756 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:16:30.359282 unknown[760]: fetched user config from "gcp"
Jan 17 00:16:30.345799 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 17 00:16:30.364914 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:16:30.351993 ignition[760]: GET result: OK
Jan 17 00:16:30.405108 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:16:30.352104 ignition[760]: parsing config with SHA512: 6bae2c076d539566b06faa81cfa9ba4676207e7cbbce07c730c2780eeaa5fdaaeda16208df1f62a263b9b900147925e57ad722a5d0057d199e00326b28c29458
Jan 17 00:16:30.486732 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:16:30.360816 ignition[760]: fetch: fetch complete
Jan 17 00:16:30.495958 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:16:30.360838 ignition[760]: fetch: fetch passed
Jan 17 00:16:30.565516 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:16:30.360947 ignition[760]: Ignition finished successfully
Jan 17 00:16:30.570314 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:16:30.482804 ignition[766]: Ignition 2.19.0
Jan 17 00:16:30.602907 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:16:30.482815 ignition[766]: Stage: kargs
Jan 17 00:16:30.631100 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:16:30.483062 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:30.656264 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:16:30.483075 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:30.681144 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:16:30.484196 ignition[766]: kargs: kargs passed
Jan 17 00:16:30.694938 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:16:30.484290 ignition[766]: Ignition finished successfully
Jan 17 00:16:30.561232 ignition[773]: Ignition 2.19.0
Jan 17 00:16:30.561254 ignition[773]: Stage: disks
Jan 17 00:16:30.561673 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:30.561688 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:30.563713 ignition[773]: disks: disks passed
Jan 17 00:16:30.563963 ignition[773]: Ignition finished successfully
Jan 17 00:16:30.770057 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 00:16:30.910974 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:16:30.915982 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:16:31.087936 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:16:31.088963 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:16:31.104720 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:16:31.123818 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:16:31.141961 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:16:31.163322 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:16:31.163435 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:16:31.256213 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789)
Jan 17 00:16:31.256308 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:16:31.256329 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:16:31.256346 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:16:31.256362 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:16:31.256377 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:16:31.163486 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:16:31.232432 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:16:31.267019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:16:31.290983 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:16:31.427021 systemd-networkd[750]: eth0: Gained IPv6LL
Jan 17 00:16:31.465616 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:16:31.480369 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:16:31.491794 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:16:31.501755 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:16:31.684189 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:16:31.701057 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:16:31.722023 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:16:31.745673 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:16:31.753350 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:16:31.801037 ignition[901]: INFO : Ignition 2.19.0
Jan 17 00:16:31.801037 ignition[901]: INFO : Stage: mount
Jan 17 00:16:31.810156 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:31.810156 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:31.810156 ignition[901]: INFO : mount: mount passed
Jan 17 00:16:31.810156 ignition[901]: INFO : Ignition finished successfully
Jan 17 00:16:31.801555 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:16:31.825526 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:16:31.861316 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:16:32.097095 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:16:32.150620 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (913)
Jan 17 00:16:32.160654 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:16:32.160775 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:16:32.174456 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:16:32.194179 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:16:32.194319 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:16:32.199209 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:16:32.245925 ignition[930]: INFO : Ignition 2.19.0
Jan 17 00:16:32.245925 ignition[930]: INFO : Stage: files
Jan 17 00:16:32.260861 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:32.260861 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:32.260861 ignition[930]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:16:32.260861 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:16:32.260861 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:16:32.317787 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:16:32.317787 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:16:32.317787 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:16:32.317787 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:16:32.317787 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 17 00:16:32.265106 unknown[930]: wrote ssh authorized keys file for user: core
Jan 17 00:16:32.398928 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:16:32.596134 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:16:32.613859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 17 00:16:33.121240 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 00:16:33.909117 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:16:33.909117 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:16:33.929125 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:16:33.929125 ignition[930]: INFO : files: files passed
Jan 17 00:16:33.929125 ignition[930]: INFO : Ignition finished successfully
Jan 17 00:16:33.914963 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:16:33.956964 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:16:33.991940 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:16:34.005544 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:16:34.172892 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:16:34.172892 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:16:34.005761 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:16:34.212914 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:16:34.103807 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:16:34.119098 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:16:34.146975 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:16:34.250147 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:16:34.250305 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:16:34.264041 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:16:34.286077 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:16:34.305179 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:16:34.311969 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:16:34.376654 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:16:34.403938 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:16:34.452062 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:16:34.464170 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:16:34.495101 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:16:34.495686 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:16:34.495934 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:16:34.542183 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:16:34.542688 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:16:34.577068 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:16:34.577550 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:16:34.616042 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:16:34.616564 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:16:34.654087 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:16:34.654568 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:16:34.693058 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:16:34.710990 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:16:34.711378 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:16:34.711691 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:16:34.754174 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:16:34.765072 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:16:34.787109 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:16:34.787346 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:16:34.811108 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:16:34.811354 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:16:34.843183 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:16:34.843438 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:16:34.865184 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:16:34.865393 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:16:34.892050 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:16:34.945819 ignition[983]: INFO : Ignition 2.19.0
Jan 17 00:16:34.945819 ignition[983]: INFO : Stage: umount
Jan 17 00:16:34.945819 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:34.945819 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:34.945819 ignition[983]: INFO : umount: umount passed
Jan 17 00:16:34.945819 ignition[983]: INFO : Ignition finished successfully
Jan 17 00:16:34.920021 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:16:34.955808 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:16:34.956126 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:16:34.968350 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:16:34.968778 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:16:35.008080 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:16:35.009149 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:16:35.009291 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:16:35.026769 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:16:35.026909 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:16:35.044460 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:16:35.044717 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:16:35.071100 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:16:35.071197 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:16:35.091139 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:16:35.091241 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:16:35.111096 systemd[1]: Stopped target network.target - Network.
Jan 17 00:16:35.138872 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:16:35.139131 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:16:35.148177 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:16:35.182887 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:16:35.183154 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:16:35.210883 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:16:35.220113 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:16:35.253097 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:16:35.253206 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:16:35.263233 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:16:35.263327 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:16:35.298071 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:16:35.298171 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:16:35.325072 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:16:35.325164 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:16:35.353103 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:16:35.353199 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:16:35.373345 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:16:35.377848 systemd-networkd[750]: eth0: DHCPv6 lease lost
Jan 17 00:16:35.402272 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:16:35.427853 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:16:35.428014 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:16:35.438924 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:16:35.439148 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:16:35.454721 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:16:35.454851 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:16:35.495517 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:16:35.495787 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:16:35.503883 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:16:35.535831 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:16:35.536114 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:16:35.579990 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:16:35.580106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:16:35.597994 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:16:35.598111 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:16:35.618999 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:16:35.619128 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:16:35.640208 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:16:35.667309 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:16:35.667567 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:16:35.672910 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:16:35.673000 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:16:35.700979 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:16:35.701063 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:16:35.731057 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:16:35.731156 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:16:35.760097 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:16:35.760351 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:16:35.798898 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:16:35.799046 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:16:35.836940 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:16:35.869816 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:16:35.869963 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:16:36.136848 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:16:35.891982 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 00:16:35.892115 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:16:35.915962 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:16:35.916079 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:16:35.936958 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:16:35.937084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:16:35.961687 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:16:35.961864 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:16:35.989465 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:16:35.989638 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:16:36.019748 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:16:36.033945 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:16:36.083177 systemd[1]: Switching root.
Jan 17 00:16:36.280838 systemd-journald[184]: Journal stopped Jan 17 00:16:26.199215 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:16:26.199287 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:16:26.199307 kernel: BIOS-provided physical RAM map: Jan 17 00:16:26.199322 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 17 00:16:26.199337 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 17 00:16:26.199351 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 17 00:16:26.199369 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 17 00:16:26.199389 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 17 00:16:26.199423 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 17 00:16:26.199438 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 17 00:16:26.199462 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 17 00:16:26.199478 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 17 00:16:26.199493 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 17 00:16:26.199508 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 17 00:16:26.199535 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 17 00:16:26.199552 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 17 00:16:26.199569 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 17 00:16:26.202795 kernel: NX (Execute Disable) protection: active Jan 17 00:16:26.202843 kernel: APIC: Static calls initialized Jan 17 00:16:26.202858 kernel: efi: EFI v2.7 by EDK II Jan 17 00:16:26.202872 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 Jan 17 00:16:26.202888 kernel: SMBIOS 2.4 present. 
Jan 17 00:16:26.202905 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025 Jan 17 00:16:26.202922 kernel: Hypervisor detected: KVM Jan 17 00:16:26.202953 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:16:26.202968 kernel: kvm-clock: using sched offset of 13578770291 cycles Jan 17 00:16:26.202984 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:16:26.202999 kernel: tsc: Detected 2299.998 MHz processor Jan 17 00:16:26.203014 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:16:26.203032 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:16:26.203047 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 17 00:16:26.203062 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 17 00:16:26.203076 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:16:26.203111 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 17 00:16:26.203128 kernel: Using GB pages for direct mapping Jan 17 00:16:26.203142 kernel: Secure boot disabled Jan 17 00:16:26.203155 kernel: ACPI: Early table checksum verification disabled Jan 17 00:16:26.203170 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 17 00:16:26.203185 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 17 00:16:26.203200 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 17 00:16:26.203225 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 17 00:16:26.203245 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 17 00:16:26.203261 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Jan 17 00:16:26.203276 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 17 00:16:26.203291 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 17 00:16:26.203306 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 17 00:16:26.203321 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 17 00:16:26.203342 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 17 00:16:26.203358 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 17 00:16:26.203376 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 17 00:16:26.203394 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 17 00:16:26.203410 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 17 00:16:26.203425 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 17 00:16:26.203442 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 17 00:16:26.203472 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 17 00:16:26.203490 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 17 00:16:26.203716 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 17 00:16:26.203752 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 00:16:26.203772 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 00:16:26.203792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 00:16:26.203806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Jan 17 00:16:26.203823 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 17 00:16:26.203843 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 17 00:16:26.203862 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 17 00:16:26.203881 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 17 00:16:26.203916 kernel: Zone ranges: Jan 17 00:16:26.203934 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:16:26.203954 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 00:16:26.203972 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 17 00:16:26.203991 kernel: Movable zone start for each node Jan 17 00:16:26.204009 kernel: Early memory node ranges Jan 17 00:16:26.204028 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 17 00:16:26.204047 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 17 00:16:26.204066 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 17 00:16:26.204088 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 17 00:16:26.204104 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 17 00:16:26.204118 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 17 00:16:26.204135 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:16:26.204151 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 17 00:16:26.204167 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 17 00:16:26.204185 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 17 00:16:26.204203 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 17 00:16:26.204221 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 17 00:16:26.204244 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:16:26.204262 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:16:26.204280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:16:26.204298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:16:26.204315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:16:26.204333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 00:16:26.204351 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:16:26.204368 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 00:16:26.204384 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 00:16:26.204403 kernel: Booting paravirtualized kernel on KVM Jan 17 00:16:26.204421 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:16:26.204439 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 00:16:26.204464 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 17 00:16:26.204482 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 17 00:16:26.204500 kernel: pcpu-alloc: [0] 0 1 Jan 17 00:16:26.204518 kernel: kvm-guest: PV spinlocks enabled Jan 17 00:16:26.204536 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:16:26.204556 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:16:26.206759 kernel: random: crng init done Jan 17 00:16:26.206799 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 17 00:16:26.206819 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:16:26.206838 kernel: Fallback order for Node 0: 0 Jan 17 00:16:26.206856 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 17 00:16:26.206874 kernel: Policy zone: Normal Jan 17 00:16:26.206892 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:16:26.206910 kernel: software IO TLB: area num 2. Jan 17 00:16:26.206930 kernel: Memory: 7513184K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 347140K reserved, 0K cma-reserved) Jan 17 00:16:26.206962 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:16:26.206980 kernel: Kernel/User page tables isolation: enabled Jan 17 00:16:26.206998 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:16:26.207016 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:16:26.207034 kernel: Dynamic Preempt: voluntary Jan 17 00:16:26.207051 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:16:26.207071 kernel: rcu: RCU event tracing is enabled. Jan 17 00:16:26.207091 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:16:26.207128 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:16:26.207147 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:16:26.207165 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:16:26.207188 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:16:26.207206 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:16:26.207224 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 00:16:26.207243 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:16:26.207263 kernel: Console: colour dummy device 80x25 Jan 17 00:16:26.207295 kernel: printk: console [ttyS0] enabled Jan 17 00:16:26.207312 kernel: ACPI: Core revision 20230628 Jan 17 00:16:26.207331 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:16:26.207347 kernel: x2apic enabled Jan 17 00:16:26.207365 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:16:26.207382 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 17 00:16:26.207416 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 00:16:26.207433 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 17 00:16:26.207449 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 17 00:16:26.207470 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 17 00:16:26.207489 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:16:26.207509 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 17 00:16:26.207527 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 17 00:16:26.207548 kernel: Spectre V2 : Mitigation: IBRS Jan 17 00:16:26.207565 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:16:26.207598 kernel: RETBleed: Mitigation: IBRS Jan 17 00:16:26.207617 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 00:16:26.207632 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 17 00:16:26.207656 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 00:16:26.207673 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 00:16:26.207692 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:16:26.207713 kernel: active return thunk: its_return_thunk Jan 17 00:16:26.207734 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 17 00:16:26.207754 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:16:26.207775 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:16:26.207795 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:16:26.207815 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:16:26.207841 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 00:16:26.207861 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:16:26.207881 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:16:26.207901 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:16:26.207922 kernel: landlock: Up and running. Jan 17 00:16:26.207943 kernel: SELinux: Initializing. Jan 17 00:16:26.207964 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:16:26.207984 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:16:26.208002 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 17 00:16:26.208027 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:16:26.208049 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:16:26.208070 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:16:26.208090 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 17 00:16:26.208111 kernel: signal: max sigframe size: 1776 Jan 17 00:16:26.208131 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:16:26.208153 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:16:26.208173 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:16:26.208194 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:16:26.208219 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:16:26.208240 kernel: .... 
node #0, CPUs: #1 Jan 17 00:16:26.208262 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 17 00:16:26.208294 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 17 00:16:26.208312 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:16:26.208334 kernel: smpboot: Max logical packages: 1 Jan 17 00:16:26.208354 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 17 00:16:26.208374 kernel: devtmpfs: initialized Jan 17 00:16:26.208400 kernel: x86/mm: Memory block size: 128MB Jan 17 00:16:26.208421 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 17 00:16:26.208442 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:16:26.208464 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:16:26.208484 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:16:26.208504 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:16:26.208523 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:16:26.208544 kernel: audit: type=2000 audit(1768608984.591:1): state=initialized audit_enabled=0 res=1 Jan 17 00:16:26.208564 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:16:26.209834 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:16:26.209869 kernel: cpuidle: using governor menu Jan 17 00:16:26.209890 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:16:26.209912 kernel: dca service started, version 1.12.1 Jan 17 00:16:26.209932 kernel: PCI: Using configuration type 1 for base access Jan 17 00:16:26.209953 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 00:16:26.209972 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:16:26.209992 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:16:26.210012 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:16:26.210042 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:16:26.210061 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:16:26.210081 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:16:26.210103 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:16:26.210123 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 17 00:16:26.210144 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:16:26.210164 kernel: ACPI: Interpreter enabled Jan 17 00:16:26.210184 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 00:16:26.210205 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:16:26.210230 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:16:26.210250 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 17 00:16:26.210278 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 17 00:16:26.210298 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:16:26.210679 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:16:26.210904 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 00:16:26.211098 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 00:16:26.211132 kernel: PCI host bridge to bus 0000:00 Jan 17 00:16:26.211362 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 00:16:26.211549 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 00:16:26.212855 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 00:16:26.213059 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 17 00:16:26.213239 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 00:16:26.213486 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 00:16:26.213770 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 17 00:16:26.213991 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 00:16:26.214193 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 17 00:16:26.214419 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 17 00:16:26.214797 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 17 00:16:26.215017 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 17 00:16:26.215250 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:16:26.215481 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 17 00:16:26.215780 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 17 00:16:26.215999 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 00:16:26.216199 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 17 00:16:26.216721 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 17 00:16:26.216762 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 00:16:26.216792 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 00:16:26.216808 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 
00:16:26.216825 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 00:16:26.216840 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 00:16:26.216856 kernel: iommu: Default domain type: Translated Jan 17 00:16:26.216885 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:16:26.216901 kernel: efivars: Registered efivars operations Jan 17 00:16:26.216919 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:16:26.216936 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 00:16:26.216961 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 17 00:16:26.216978 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 17 00:16:26.216994 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 17 00:16:26.217014 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 17 00:16:26.217032 kernel: vgaarb: loaded Jan 17 00:16:26.217052 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 00:16:26.217071 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:16:26.217091 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:16:26.217112 kernel: pnp: PnP ACPI init Jan 17 00:16:26.217135 kernel: pnp: PnP ACPI: found 7 devices Jan 17 00:16:26.217155 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:16:26.217174 kernel: NET: Registered PF_INET protocol family Jan 17 00:16:26.217194 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 00:16:26.217214 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 17 00:16:26.217235 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:16:26.217268 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:16:26.217290 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 00:16:26.217310 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 17 00:16:26.217334 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:16:26.217355 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:16:26.217375 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:16:26.217395 kernel: NET: Registered PF_XDP protocol family Jan 17 00:16:26.218628 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 00:16:26.218857 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:16:26.219051 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:16:26.219228 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 17 00:16:26.219602 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 00:16:26.219634 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:16:26.219654 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 00:16:26.219674 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 17 00:16:26.219695 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 00:16:26.219716 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 00:16:26.219735 kernel: clocksource: Switched to clocksource tsc Jan 17 00:16:26.219754 kernel: Initialise system trusted keyrings Jan 17 00:16:26.219782 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 
Jan 17 00:16:26.219801 kernel: Key type asymmetric registered Jan 17 00:16:26.219820 kernel: Asymmetric key parser 'x509' registered Jan 17 00:16:26.219840 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:16:26.219861 kernel: io scheduler mq-deadline registered Jan 17 00:16:26.219880 kernel: io scheduler kyber registered Jan 17 00:16:26.219899 kernel: io scheduler bfq registered Jan 17 00:16:26.219919 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:16:26.219941 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 17 00:16:26.220207 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 17 00:16:26.220238 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 17 00:16:26.220484 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 17 00:16:26.220517 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 17 00:16:26.220770 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 17 00:16:26.220803 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:16:26.220826 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:16:26.220847 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 00:16:26.220869 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 17 00:16:26.220898 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 17 00:16:26.221137 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 17 00:16:26.221203 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:16:26.221224 kernel: i8042: Warning: Keylock active Jan 17 00:16:26.221252 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:16:26.221272 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:16:26.221490 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 17 00:16:26.221749 kernel: rtc_cmos 00:00: registered as rtc0 Jan 17 00:16:26.221944 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:16:25 UTC (1768608985) Jan 17 00:16:26.222137 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 17 00:16:26.222166 kernel: intel_pstate: CPU model not supported Jan 17 00:16:26.222186 kernel: pstore: Using crash dump compression: deflate Jan 17 00:16:26.222206 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:16:26.222225 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:16:26.222260 kernel: Segment Routing with IPv6 Jan 17 00:16:26.222297 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:16:26.222326 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:16:26.222346 kernel: Key type dns_resolver registered Jan 17 00:16:26.222364 kernel: IPI shorthand broadcast: enabled Jan 17 00:16:26.222384 kernel: sched_clock: Marking stable (1028005095, 184429719)->(1267630016, -55195202) Jan 17 00:16:26.222402 kernel: registered taskstats version 1 Jan 17 00:16:26.222418 kernel: Loading compiled-in X.509 certificates Jan 17 00:16:26.222436 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:16:26.222454 kernel: Key type .fscrypt registered Jan 17 00:16:26.222470 kernel: Key type fscrypt-provisioning registered Jan 17 00:16:26.222494 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:16:26.222512 kernel: ima: No architecture policies found Jan 17 00:16:26.222532 kernel: clk: Disabling unused clocks Jan 17 
00:16:26.222553 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:16:26.222572 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 17 00:16:26.222590 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:16:26.222629 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:16:26.222646 kernel: Run /init as init process Jan 17 00:16:26.222660 kernel: with arguments: Jan 17 00:16:26.222683 kernel: /init Jan 17 00:16:26.222698 kernel: with environment: Jan 17 00:16:26.222714 kernel: HOME=/ Jan 17 00:16:26.222730 kernel: TERM=linux Jan 17 00:16:26.222752 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:16:26.222771 systemd[1]: Detected virtualization google. Jan 17 00:16:26.222789 systemd[1]: Detected architecture x86-64. Jan 17 00:16:26.222812 systemd[1]: Running in initrd. Jan 17 00:16:26.222831 systemd[1]: No hostname configured, using default hostname. Jan 17 00:16:26.222849 systemd[1]: Hostname set to . Jan 17 00:16:26.223023 systemd[1]: Initializing machine ID from random generator. Jan 17 00:16:26.223062 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:16:26.223081 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:16:26.223101 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:16:26.223124 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:16:26.223159 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:16:26.223182 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:16:26.223199 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:16:26.223222 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:16:26.223242 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:16:26.223262 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:16:26.223283 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:16:26.223410 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:16:26.223431 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:16:26.223479 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:16:26.223520 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:16:26.223542 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:16:26.223562 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:16:26.223641 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:16:26.223663 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:16:26.223685 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 00:16:26.223707 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:16:26.223728 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:16:26.223748 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:16:26.223770 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:16:26.223793 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:16:26.223813 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:16:26.223844 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:16:26.223867 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:16:26.223889 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:16:26.223911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:16:26.224031 systemd-journald[184]: Collecting audit messages is disabled. Jan 17 00:16:26.224090 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:16:26.224113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:16:26.224147 systemd-journald[184]: Journal started Jan 17 00:16:26.224194 systemd-journald[184]: Runtime Journal (/run/log/journal/b83b037fa81947ee9d983203186a24cc) is 8.0M, max 148.7M, 140.7M free. Jan 17 00:16:26.203170 systemd-modules-load[185]: Inserted module 'overlay' Jan 17 00:16:26.230887 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:16:26.239701 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:16:26.258613 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:16:26.262032 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:16:26.267481 kernel: Bridge firewalling registered Jan 17 00:16:26.263881 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 17 00:16:26.265890 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:16:26.271343 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:16:26.285502 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:16:26.296469 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:16:26.309120 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:16:26.320940 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:16:26.330977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:16:26.342725 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:16:26.366037 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:16:26.366798 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:16:26.380287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:16:26.400037 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:16:26.438028 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 00:16:26.457878 dracut-cmdline[216]: dracut-dracut-053 Jan 17 00:16:26.466082 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:16:26.513946 systemd-resolved[217]: Positive Trust Anchors: Jan 17 00:16:26.513962 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:16:26.514036 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:16:26.620077 kernel: SCSI subsystem initialized Jan 17 00:16:26.620149 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:16:26.520905 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 17 00:16:26.632818 kernel: iscsi: registered transport (tcp) Jan 17 00:16:26.524633 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:16:26.545986 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:16:26.670189 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:16:26.670310 kernel: QLogic iSCSI HBA Driver Jan 17 00:16:26.738941 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:16:26.743969 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:16:26.828071 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:16:26.828184 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:16:26.828212 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:16:26.891660 kernel: raid6: avx2x4 gen() 17638 MB/s Jan 17 00:16:26.912676 kernel: raid6: avx2x2 gen() 17247 MB/s Jan 17 00:16:26.938768 kernel: raid6: avx2x1 gen() 13196 MB/s Jan 17 00:16:26.938868 kernel: raid6: using algorithm avx2x4 gen() 17638 MB/s Jan 17 00:16:26.965866 kernel: raid6: .... xor() 6354 MB/s, rmw enabled Jan 17 00:16:26.965974 kernel: raid6: using avx2x2 recovery algorithm Jan 17 00:16:26.997642 kernel: xor: automatically using best checksumming function avx Jan 17 00:16:27.189658 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:16:27.208695 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:16:27.214060 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:16:27.266138 systemd-udevd[400]: Using default interface naming scheme 'v255'. Jan 17 00:16:27.274302 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:16:27.292956 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 17 00:16:27.347753 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jan 17 00:16:27.395555 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:16:27.420927 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:16:27.534057 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:16:27.562891 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:16:27.618103 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:16:27.639407 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:16:27.662848 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:16:27.758772 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:16:27.697760 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:16:27.773991 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:16:27.852939 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:16:27.919197 kernel: scsi host0: Virtio SCSI HBA Jan 17 00:16:27.919718 kernel: blk-mq: reduced tag depth to 10240 Jan 17 00:16:27.919758 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 00:16:27.919789 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 17 00:16:27.919845 kernel: AES CTR mode by8 optimization enabled Jan 17 00:16:27.853214 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:16:27.907743 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:16:28.012507 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Jan 17 00:16:28.013064 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 17 00:16:28.013338 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 17 00:16:28.013980 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 17 00:16:28.014463 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 00:16:28.014847 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:16:28.014877 kernel: GPT:17805311 != 33554431 Jan 17 00:16:27.930759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:16:28.051826 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:16:28.051873 kernel: GPT:17805311 != 33554431 Jan 17 00:16:28.051899 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:16:28.051925 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:16:28.051961 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 17 00:16:27.931059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:16:27.944005 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:16:27.999143 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:16:28.091175 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:16:28.121660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 17 00:16:28.166842 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (453)
Jan 17 00:16:28.166891 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (446)
Jan 17 00:16:28.171305 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 17 00:16:28.220325 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 17 00:16:28.239107 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 17 00:16:28.256874 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 17 00:16:28.289669 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 17 00:16:28.310913 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:16:28.334955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:16:28.357403 disk-uuid[540]: Primary Header is updated.
Jan 17 00:16:28.357403 disk-uuid[540]: Secondary Entries is updated.
Jan 17 00:16:28.357403 disk-uuid[540]: Secondary Header is updated.
Jan 17 00:16:28.383949 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:16:28.402652 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:16:28.427516 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:16:28.455847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:16:29.423915 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:16:29.424021 disk-uuid[541]: The operation has completed successfully.
Jan 17 00:16:29.524909 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:16:29.525110 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:16:29.564959 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:16:29.599753 sh[566]: Success
Jan 17 00:16:29.624625 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:16:29.709993 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:16:29.718888 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:16:29.760434 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:16:29.799207 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:16:29.799343 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:16:29.799371 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:16:29.809162 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:16:29.816218 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:16:29.854634 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:16:29.863481 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:16:29.864783 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:16:29.870975 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
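verity-setup here binds the read-only /usr partition to the verity.usrhash root hash from the kernel command line: dm-verity hashes every data block, builds a hash tree over those digests, and refuses reads whose path to the root hash does not verify. A bare-bones sketch of the Merkle construction, with the caveat that real dm-verity also mixes in a salt, pads hash blocks, and stores a superblock; this only shows the idea:

```python
import hashlib

BLOCK = 4096  # dm-verity's usual data/hash block size

def verity_root(data: bytes) -> str:
    """Collapse 4 KiB block digests level by level into one root hash."""
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)]
    while len(level) > 1:
        joined = b"".join(level)
        level = [hashlib.sha256(joined[i:i + BLOCK]).digest()
                 for i in range(0, len(joined), BLOCK)]
    return level[0].hex()
```

The kernel compares the computed root against the 5950c0a3... value passed as verity.usrhash; a single flipped bit anywhere in /usr changes the root and fails the boot.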
Jan 17 00:16:29.930272 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:16:29.930372 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:16:29.930401 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:16:29.937457 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:16:29.984237 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:16:29.984296 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:16:29.984322 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:16:29.974635 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:16:29.995301 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:16:30.021981 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:16:30.128461 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:16:30.169136 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:16:30.246364 ignition[671]: Ignition 2.19.0
Jan 17 00:16:30.246386 ignition[671]: Stage: fetch-offline
Jan 17 00:16:30.250799 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:16:30.246464 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:30.258144 systemd-networkd[750]: lo: Link UP
Jan 17 00:16:30.246481 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:30.258154 systemd-networkd[750]: lo: Gained carrier
Jan 17 00:16:30.246715 ignition[671]: parsed url from cmdline: ""
Jan 17 00:16:30.260518 systemd-networkd[750]: Enumeration completed
Jan 17 00:16:30.246723 ignition[671]: no config URL provided
Jan 17 00:16:30.261820 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:16:30.246733 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:16:30.261831 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:16:30.246750 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:16:30.264465 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:16:30.246764 ignition[671]: failed to fetch config: resource requires networking
Jan 17 00:16:30.264960 systemd[1]: Reached target network.target - Network.
Jan 17 00:16:30.247128 ignition[671]: Ignition finished successfully
Jan 17 00:16:30.264966 systemd-networkd[750]: eth0: Link UP
Jan 17 00:16:30.344492 ignition[760]: Ignition 2.19.0
Jan 17 00:16:30.264973 systemd-networkd[750]: eth0: Gained carrier
Jan 17 00:16:30.344502 ignition[760]: Stage: fetch
Jan 17 00:16:30.264998 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:16:30.345519 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:30.282764 systemd-networkd[750]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694'
Jan 17 00:16:30.345548 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:30.282785 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.35/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 17 00:16:30.345731 ignition[760]: parsed url from cmdline: ""
Jan 17 00:16:30.297945 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:16:30.345736 ignition[760]: no config URL provided
Jan 17 00:16:30.359257 unknown[760]: fetched base config from "system"
Jan 17 00:16:30.345744 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:16:30.359271 unknown[760]: fetched base config from "system"
Jan 17 00:16:30.345756 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:16:30.359282 unknown[760]: fetched user config from "gcp"
Jan 17 00:16:30.345799 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 17 00:16:30.364914 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:16:30.351993 ignition[760]: GET result: OK
Jan 17 00:16:30.405108 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:16:30.352104 ignition[760]: parsing config with SHA512: 6bae2c076d539566b06faa81cfa9ba4676207e7cbbce07c730c2780eeaa5fdaaeda16208df1f62a263b9b900147925e57ad722a5d0057d199e00326b28c29458
Jan 17 00:16:30.486732 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:16:30.360816 ignition[760]: fetch: fetch complete
Jan 17 00:16:30.495958 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:16:30.360838 ignition[760]: fetch: fetch passed
Jan 17 00:16:30.565516 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:16:30.360947 ignition[760]: Ignition finished successfully
Jan 17 00:16:30.570314 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:16:30.482804 ignition[766]: Ignition 2.19.0
Jan 17 00:16:30.602907 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:16:30.482815 ignition[766]: Stage: kargs
Jan 17 00:16:30.631100 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:16:30.483062 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:30.656264 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:16:30.483075 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:30.681144 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:16:30.484196 ignition[766]: kargs: kargs passed
Jan 17 00:16:30.694938 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
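The fetch stage above pulls the user config from the GCE metadata server and logs a SHA512 of it before parsing. Ignition itself is Go; a minimal Python sketch of the same two steps, using the documented endpoint from the log and the Metadata-Flavor header GCE requires on every metadata request:

```python
import hashlib
import urllib.request

URL = ("http://169.254.169.254/computeMetadata/v1/"
       "instance/attributes/user-data")

def fetch_and_digest() -> tuple[bytes, str]:
    # GCE's metadata server rejects requests without this header.
    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        raw = resp.read()
    # Same digest Ignition prints as "parsing config with SHA512: ..."
    return raw, hashlib.sha512(raw).hexdigest()
```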
Jan 17 00:16:30.484290 ignition[766]: Ignition finished successfully
Jan 17 00:16:30.561232 ignition[773]: Ignition 2.19.0
Jan 17 00:16:30.561254 ignition[773]: Stage: disks
Jan 17 00:16:30.561673 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:30.561688 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:30.563713 ignition[773]: disks: disks passed
Jan 17 00:16:30.563963 ignition[773]: Ignition finished successfully
Jan 17 00:16:30.770057 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 00:16:30.910974 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:16:30.915982 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:16:31.087936 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:16:31.088963 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:16:31.104720 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:16:31.123818 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:16:31.141961 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:16:31.163322 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:16:31.163435 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:16:31.256213 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789)
Jan 17 00:16:31.256308 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:16:31.256329 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:16:31.256346 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:16:31.256362 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:16:31.256377 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:16:31.163486 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:16:31.232432 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:16:31.267019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:16:31.290983 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:16:31.427021 systemd-networkd[750]: eth0: Gained IPv6LL
Jan 17 00:16:31.465616 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:16:31.480369 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:16:31.491794 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:16:31.501755 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:16:31.684189 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:16:31.701057 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:16:31.722023 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:16:31.745673 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:16:31.753350 systemd[1]: sysroot-oem.mount: Deactivated successfully.
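The "ROOT: clean, 14/1628000 files, 120691/1617920 blocks" summary comes straight from the ext4 superblock: fsck reads the totals, the free counts, and the clean flag and prints used/total. A hedged sketch of where those numbers live, assuming the classic ext2/3/4 superblock layout at byte 1024 (32-bit low block counts are enough for a 16 GiB disk):

```python
import struct

def ext4_summary(dev_path: str) -> str:
    with open(dev_path, "rb") as dev:
        dev.seek(1024)                  # superblock starts at byte 1024
        sb = dev.read(64)
    # s_inodes_count, s_blocks_count_lo, s_r_blocks_count_lo,
    # s_free_blocks_count_lo, s_free_inodes_count
    inodes, blocks, _resv, free_b, free_i = struct.unpack_from("<5I", sb, 0)
    magic, state = struct.unpack_from("<HH", sb, 56)
    assert magic == 0xEF53, "not an ext filesystem"
    status = "clean" if state & 1 else "not clean"   # EXT2_VALID_FS bit
    return (f"{status}, {inodes - free_i}/{inodes} files, "
            f"{blocks - free_b}/{blocks} blocks")
```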
Jan 17 00:16:31.801037 ignition[901]: INFO     : Ignition 2.19.0
Jan 17 00:16:31.801037 ignition[901]: INFO     : Stage: mount
Jan 17 00:16:31.810156 ignition[901]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:31.810156 ignition[901]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:31.810156 ignition[901]: INFO     : mount: mount passed
Jan 17 00:16:31.810156 ignition[901]: INFO     : Ignition finished successfully
Jan 17 00:16:31.801555 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:16:31.825526 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:16:31.861316 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:16:32.097095 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:16:32.150620 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (913)
Jan 17 00:16:32.160654 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:16:32.160775 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:16:32.174456 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:16:32.194179 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:16:32.194319 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:16:32.199209 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:16:32.245925 ignition[930]: INFO     : Ignition 2.19.0
Jan 17 00:16:32.245925 ignition[930]: INFO     : Stage: files
Jan 17 00:16:32.260861 ignition[930]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:32.260861 ignition[930]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:32.260861 ignition[930]: DEBUG    : files: compiled without relabeling support, skipping
Jan 17 00:16:32.260861 ignition[930]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 17 00:16:32.260861 ignition[930]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:16:32.317787 ignition[930]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:16:32.317787 ignition[930]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 17 00:16:32.317787 ignition[930]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:16:32.317787 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:16:32.317787 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 17 00:16:32.265106 unknown[930]: wrote ssh authorized keys file for user: core
Jan 17 00:16:32.398928 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:16:32.596134 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:16:32.613859 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 17 00:16:33.121240 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 00:16:33.909117 ignition[930]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:16:33.909117 ignition[930]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO     : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO     : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:16:33.929125 ignition[930]: INFO     : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:16:33.929125 ignition[930]: INFO     : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:16:33.929125 ignition[930]: INFO     : files: files passed
Jan 17 00:16:33.929125 ignition[930]: INFO     : Ignition finished successfully
Jan 17 00:16:33.914963 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:16:33.956964 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:16:33.991940 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:16:34.005544 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:16:34.172892 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:16:34.172892 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:16:34.005761 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:16:34.212914 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:16:34.103807 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:16:34.119098 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:16:34.146975 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:16:34.250147 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:16:34.250305 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:16:34.264041 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:16:34.286077 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:16:34.305179 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:16:34.311969 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:16:34.376654 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:16:34.403938 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:16:34.452062 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:16:34.464170 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:16:34.495101 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:16:34.495686 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:16:34.495934 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:16:34.542183 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:16:34.542688 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:16:34.577068 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:16:34.577550 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:16:34.616042 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:16:34.616564 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:16:34.654087 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:16:34.654568 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:16:34.693058 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:16:34.710990 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:16:34.711378 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:16:34.711691 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:16:34.754174 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:16:34.765072 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:16:34.787109 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:16:34.787346 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:16:34.811108 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:16:34.811354 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:16:34.843183 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:16:34.843438 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:16:34.865184 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:16:34.865393 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:16:34.892050 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:16:34.945819 ignition[983]: INFO     : Ignition 2.19.0
Jan 17 00:16:34.945819 ignition[983]: INFO     : Stage: umount
Jan 17 00:16:34.945819 ignition[983]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:16:34.945819 ignition[983]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:16:34.945819 ignition[983]: INFO     : umount: umount passed
Jan 17 00:16:34.945819 ignition[983]: INFO     : Ignition finished successfully
Jan 17 00:16:34.920021 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:16:34.955808 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:16:34.956126 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:16:34.968350 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:16:34.968778 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:16:35.008080 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:16:35.009149 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:16:35.009291 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:16:35.026769 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:16:35.026909 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:16:35.044460 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:16:35.044717 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:16:35.071100 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:16:35.071197 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:16:35.091139 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:16:35.091241 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:16:35.111096 systemd[1]: Stopped target network.target - Network.
Jan 17 00:16:35.138872 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:16:35.139131 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:16:35.148177 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:16:35.182887 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:16:35.183154 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:16:35.210883 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:16:35.220113 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:16:35.253097 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:16:35.253206 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:16:35.263233 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:16:35.263327 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:16:35.298071 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:16:35.298171 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:16:35.325072 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:16:35.325164 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:16:35.353103 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:16:35.353199 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:16:35.373345 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:16:35.377848 systemd-networkd[750]: eth0: DHCPv6 lease lost
Jan 17 00:16:35.402272 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:16:35.427853 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:16:35.428014 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:16:35.438924 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:16:35.439148 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:16:35.454721 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:16:35.454851 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:16:35.495517 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:16:35.495787 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:16:35.503883 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:16:35.535831 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:16:35.536114 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:16:35.579990 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:16:35.580106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:16:35.597994 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:16:35.598111 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:16:35.618999 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:16:35.619128 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:16:35.640208 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:16:35.667309 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:16:35.667567 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:16:35.672910 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:16:35.673000 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:16:35.700979 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:16:35.701063 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:16:35.731057 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:16:35.731156 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:16:35.760097 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:16:35.760351 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:16:35.798898 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:16:35.799046 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:16:35.836940 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:16:35.869816 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:16:35.869963 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:16:36.136848 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:16:35.891982 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 00:16:35.892115 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:16:35.915962 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:16:35.916079 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:16:35.936958 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:16:35.937084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:16:35.961687 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:16:35.961864 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:16:35.989465 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:16:35.989638 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:16:36.019748 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:16:36.033945 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:16:36.083177 systemd[1]: Switching root.
Jan 17 00:16:36.280838 systemd-journald[184]: Journal stopped
Jan 17 00:16:39.222016 kernel: SELinux:  policy capability network_peer_controls=1
Jan 17 00:16:39.222120 kernel: SELinux:  policy capability open_perms=1
Jan 17 00:16:39.222144 kernel: SELinux:  policy capability extended_socket_class=1
Jan 17 00:16:39.222162 kernel: SELinux:  policy capability always_check_network=0
Jan 17 00:16:39.222177 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 17 00:16:39.222196 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 17 00:16:39.222216 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 17 00:16:39.222240 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 17 00:16:39.222258 kernel: audit: type=1403 audit(1768608996.804:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:16:39.222282 systemd[1]: Successfully loaded SELinux policy in 96.199ms.
Jan 17 00:16:39.222304 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.768ms.
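"Switching root" is the pivot from the initramfs into the real root that Ignition just prepared: journald is stopped, /sysroot is moved over /, and PID 1 re-executes from the new /usr. A conceptual sketch of the core moves, under the assumption of a prepared /sysroot; the real systemd/switch_root path also recursively deletes the old initramfs contents and hands over state:

```python
import ctypes
import os

MS_MOVE = 8192  # MS_MOVE from <sys/mount.h>

def switch_root(new_root: str = "/sysroot",
                init: str = "/usr/lib/systemd/systemd") -> None:
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    os.chdir(new_root)
    # mount("/sysroot", "/", NULL, MS_MOVE, NULL): slide the prepared
    # root filesystem over the initramfs root.
    if libc.mount(new_root.encode(), b"/", None, MS_MOVE, None) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    os.chroot(".")
    os.chdir("/")
    os.execv(init, [init])   # become the real PID 1
```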
Jan 17 00:16:39.222324 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:16:39.222345 systemd[1]: Detected virtualization google.
Jan 17 00:16:39.222366 systemd[1]: Detected architecture x86-64.
Jan 17 00:16:39.222393 systemd[1]: Detected first boot.
Jan 17 00:16:39.222416 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:16:39.222437 zram_generator::config[1024]: No configuration found.
Jan 17 00:16:39.222459 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:16:39.222479 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:16:39.222505 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:16:39.222525 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:16:39.222547 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:16:39.222568 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:16:39.222616 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:16:39.222638 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:16:39.222657 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:16:39.222698 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:16:39.222719 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:16:39.222739 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:16:39.222758 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:16:39.222779 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:16:39.222799 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:16:39.222818 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:16:39.222838 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:16:39.222863 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:16:39.222880 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:16:39.222901 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:16:39.222920 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:16:39.222939 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:16:39.222958 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:16:39.222986 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:16:39.223009 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:16:39.223045 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:16:39.223071 systemd[1]: Reached target slices.target - Slice Units.
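"Detected first boot" plus "Initializing machine ID from random generator" means there was no /etc/machine-id yet, so PID 1 minted one: 128 random bits formatted as 32 lowercase hex digits, with the v4-UUID version and variant bits set the way sd-id128 does. A hedged sketch of that generation step (later, systemd-machine-id-commit.service persists the transient ID to disk, as logged further down):

```python
import os

def new_machine_id() -> str:
    b = bytearray(os.urandom(16))
    b[6] = (b[6] & 0x0F) | 0x40   # UUID version 4
    b[8] = (b[8] & 0x3F) | 0x80   # RFC 4122 variant
    return b.hex()                # e.g. the 12b53f90... seen in journal paths
```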
Jan 17 00:16:39.223092 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:16:39.223112 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:16:39.223132 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:16:39.223154 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:16:39.223179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:16:39.223202 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:16:39.223232 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:16:39.223255 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:16:39.223278 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:16:39.223302 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:16:39.223326 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:16:39.223354 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:16:39.223378 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:16:39.223405 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:16:39.223430 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:16:39.223453 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:16:39.223476 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:16:39.223500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:16:39.223523 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:16:39.223552 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:16:39.223576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:16:39.223646 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:16:39.223671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:16:39.223695 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:16:39.223719 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:16:39.223744 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:16:39.223768 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:16:39.223799 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:16:39.223823 kernel: fuse: init (API version 7.39)
Jan 17 00:16:39.223844 kernel: ACPI: bus type drm_connector registered
Jan 17 00:16:39.223866 kernel: loop: module loaded
Jan 17 00:16:39.223888 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:16:39.223912 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:16:39.223937 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:16:39.223960 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:16:39.223987 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:16:39.224091 systemd-journald[1111]: Collecting audit messages is disabled.
Jan 17 00:16:39.224151 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:16:39.224176 systemd-journald[1111]: Journal started
Jan 17 00:16:39.224226 systemd-journald[1111]: Runtime Journal (/run/log/journal/12b53f9031a5441c9206224e35d5f0bb) is 8.0M, max 148.7M, 140.7M free.
Jan 17 00:16:37.908171 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:16:37.933818 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 17 00:16:37.934551 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:16:39.258655 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:16:39.277730 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:16:39.277844 systemd[1]: Stopped verity-setup.service.
Jan 17 00:16:39.307624 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:16:39.319676 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:16:39.330603 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:16:39.341213 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:16:39.352223 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:16:39.363170 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:16:39.374178 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:16:39.386188 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:16:39.396465 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:16:39.408395 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:16:39.420365 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:16:39.420650 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:16:39.432302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:16:39.432573 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:16:39.445348 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:16:39.445672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:16:39.456262 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:16:39.456528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:16:39.469361 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:16:39.469666 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:16:39.480362 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:16:39.480677 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:16:39.492392 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:16:39.503480 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:16:39.516409 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:16:39.529326 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:16:39.558307 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:16:39.574821 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:16:39.600748 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:16:39.611911 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:16:39.612209 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:16:39.624948 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:16:39.647022 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:16:39.667929 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:16:39.678099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:16:39.688941 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:16:39.706950 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:16:39.715957 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:16:39.731605 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:16:39.743725 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:16:39.752184 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:16:39.764288 systemd-journald[1111]: Time spent on flushing to /var/log/journal/12b53f9031a5441c9206224e35d5f0bb is 242.400ms for 930 entries.
Jan 17 00:16:39.764288 systemd-journald[1111]: System Journal (/var/log/journal/12b53f9031a5441c9206224e35d5f0bb) is 8.0M, max 584.8M, 576.8M free.
Jan 17 00:16:40.077684 systemd-journald[1111]: Received client request to flush runtime journal.
Jan 17 00:16:40.078349 kernel: loop0: detected capacity change from 0 to 142488
Jan 17 00:16:40.079063 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:16:39.777644 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:16:39.794964 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:16:39.815950 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:16:39.834048 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:16:39.847336 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:16:39.859351 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:16:39.873387 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:16:39.905091 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:16:39.920174 systemd-tmpfiles[1143]: ACLs are not supported, ignoring.
Jan 17 00:16:39.920203 systemd-tmpfiles[1143]: ACLs are not supported, ignoring.
Jan 17 00:16:39.928805 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:16:39.940407 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:16:39.996073 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:16:40.018873 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:16:40.045230 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 00:16:40.083052 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:16:40.107164 kernel: loop1: detected capacity change from 0 to 54824
Jan 17 00:16:40.103875 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:16:40.105518 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:16:40.196705 kernel: loop2: detected capacity change from 0 to 224512
Jan 17 00:16:40.207212 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:16:40.228966 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:16:40.308770 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Jan 17 00:16:40.314455 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Jan 17 00:16:40.325852 kernel: loop3: detected capacity change from 0 to 140768
Jan 17 00:16:40.326846 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:16:40.467628 kernel: loop4: detected capacity change from 0 to 142488
Jan 17 00:16:40.536629 kernel: loop5: detected capacity change from 0 to 54824
Jan 17 00:16:40.576648 kernel: loop6: detected capacity change from 0 to 224512
Jan 17 00:16:40.634198 kernel: loop7: detected capacity change from 0 to 140768
Jan 17 00:16:40.695937 (sd-merge)[1169]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Jan 17 00:16:40.697022 (sd-merge)[1169]: Merged extensions into '/usr'.
Jan 17 00:16:40.714682 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:16:40.714712 systemd[1]: Reloading...
Jan 17 00:16:40.953478 zram_generator::config[1194]: No configuration found.
Jan 17 00:16:41.296340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:16:41.351348 ldconfig[1137]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:16:41.412963 systemd[1]: Reloading finished in 697 ms.
Jan 17 00:16:41.451363 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:16:41.462459 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:16:41.476432 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:16:41.501981 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:16:41.521164 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:16:41.542920 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
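The loop0-loop7 capacity changes and the (sd-merge) lines belong together: each sysext image (containerd-flatcar, docker-flatcar, kubernetes, oem-gce) is attached as a squashfs loop device, then systemd-sysext stacks those trees and the base /usr as read-only overlayfs layers. A conceptual sketch of the merge step, under the assumption that the extension trees are already mounted somewhere; overlayfs treats the first lowerdir as the topmost layer, so extensions are listed before the base /usr:

```python
import subprocess

def merge_sysext(extension_dirs: list[str]) -> None:
    # lowerdir-only overlay mounts are read-only, matching sysext's
    # immutable merge of extension content over /usr.
    lower = ":".join(extension_dirs + ["/usr"])
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", f"lowerdir={lower}", "/usr"],
        check=True,
    )
```

Because the merge changes unit and binary content under /usr, systemd immediately reloads itself afterwards, which is the "Reloading requested from client PID 1142 ('systemd-sysext')" entry above.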
Jan 17 00:16:41.558861 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:16:41.558895 systemd[1]: Reloading...
Jan 17 00:16:41.608408 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:16:41.609249 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:16:41.611379 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:16:41.612760 systemd-udevd[1238]: Using default interface naming scheme 'v255'.
Jan 17 00:16:41.614697 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 17 00:16:41.614878 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 17 00:16:41.621840 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:16:41.621874 systemd-tmpfiles[1237]: Skipping /boot
Jan 17 00:16:41.662010 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:16:41.662055 systemd-tmpfiles[1237]: Skipping /boot
Jan 17 00:16:41.695630 zram_generator::config[1264]: No configuration found.
Jan 17 00:16:42.111138 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:16:42.166483 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 17 00:16:42.166657 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1282)
Jan 17 00:16:42.211638 kernel: ACPI: button: Power Button [PWRF]
Jan 17 00:16:42.243684 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jan 17 00:16:42.281404 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jan 17 00:16:42.282014 kernel: ACPI: button: Sleep Button [SLPF]
Jan 17 00:16:42.302086 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 00:16:42.303502 systemd[1]: Reloading finished in 743 ms.
Jan 17 00:16:42.336763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:16:42.349624 kernel: EDAC MC: Ver: 3.0.0
Jan 17 00:16:42.362526 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:16:42.486161 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Jan 17 00:16:42.511083 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:16:42.517255 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 17 00:16:42.542432 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:16:42.554471 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:16:42.571997 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:16:42.578036 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:16:42.600971 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:16:42.614214 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:16:42.622996 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:16:42.646066 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:16:42.669533 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:16:42.682809 lvm[1349]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:16:42.688956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:16:42.706002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:16:42.723988 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 17 00:16:42.726683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:16:42.736003 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:16:42.746969 augenrules[1363]: No rules
Jan 17 00:16:42.750297 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:16:42.773733 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:16:42.795960 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:16:42.805830 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:16:42.824188 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:16:42.833988 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:16:42.834298 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:16:42.837567 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:16:42.855873 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:16:42.868925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:16:42.869238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:16:42.879943 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:16:42.880232 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:16:42.891609 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:16:42.904353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:16:42.904670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:16:42.905560 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:16:42.905863 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:16:42.914401 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:16:42.915075 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:16:42.933472 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 17 00:16:42.946889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:16:42.956511 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:16:42.965226 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Jan 17 00:16:42.965410 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:16:42.967761 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:16:42.972954 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:16:42.983031 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:16:42.983215 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:16:42.984165 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:16:42.998334 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:16:43.084551 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:16:43.094635 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:16:43.106511 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:16:43.116366 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 17 00:16:43.133157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:16:43.236295 systemd-networkd[1370]: lo: Link UP Jan 17 00:16:43.236318 systemd-networkd[1370]: lo: Gained carrier Jan 17 00:16:43.239452 systemd-networkd[1370]: Enumeration completed Jan 17 00:16:43.239780 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:16:43.240910 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:16:43.240927 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:16:43.242009 systemd-networkd[1370]: eth0: Link UP Jan 17 00:16:43.242027 systemd-networkd[1370]: eth0: Gained carrier Jan 17 00:16:43.242061 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:16:43.250428 systemd-resolved[1371]: Positive Trust Anchors: Jan 17 00:16:43.251073 systemd-resolved[1371]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:16:43.251158 systemd-resolved[1371]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:16:43.253713 systemd-networkd[1370]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694' Jan 17 00:16:43.253750 systemd-networkd[1370]: eth0: DHCPv4 address 10.128.0.35/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 00:16:43.261152 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:16:43.263039 systemd-resolved[1371]: Defaulting to hostname 'linux'. Jan 17 00:16:43.273155 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:16:43.285217 systemd[1]: Reached target network.target - Network. Jan 17 00:16:43.293897 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:16:43.305971 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:16:43.317061 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:16:43.329052 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:16:43.341281 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:16:43.352125 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:16:43.363923 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:16:43.375930 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:16:43.376004 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:16:43.384866 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:16:43.394690 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:16:43.407028 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:16:43.422161 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:16:43.433845 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:16:43.444073 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:16:43.453854 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:16:43.462984 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:16:43.463043 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:16:43.469878 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:16:43.494132 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
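The "Overlong DHCP hostname received, shortened" entry above is systemd-networkd coping with the kernel's 64-byte hostname limit: the DHCP-provided FQDN cannot be set verbatim, so networkd falls back to the first DNS label. A rough sketch of that fallback, using the hostname from the log (an approximation of the observed behavior, not networkd's actual C implementation):

    # Sketch: approximate how an overlong DHCP hostname gets shortened.
    # Linux HOST_NAME_MAX is 64 bytes and a single DNS label is at most
    # 63; systemd's real code validates far more than length.
    HOST_NAME_MAX = 64

    def shorten_hostname(fqdn: str) -> str:
        if len(fqdn) <= HOST_NAME_MAX:
            return fqdn
        return fqdn.split(".", 1)[0][:63]  # keep only the first label

    full = ("ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694"
            ".c.flatcar-212911.internal")
    print(shorten_hostname(full))
    # -> ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694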
Jan 17 00:16:43.508967 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 00:16:43.554843 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 00:16:43.576075 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 00:16:43.585839 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 00:16:43.594740 jq[1423]: false
Jan 17 00:16:43.597013 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 00:16:43.614756 systemd[1]: Started ntpd.service - Network Time Service.
Jan 17 00:16:43.632538 coreos-metadata[1421]: Jan 17 00:16:43.630 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Jan 17 00:16:43.632865 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 00:16:43.637625 coreos-metadata[1421]: Jan 17 00:16:43.636 INFO Fetch successful
Jan 17 00:16:43.638041 coreos-metadata[1421]: Jan 17 00:16:43.637 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Jan 17 00:16:43.640511 coreos-metadata[1421]: Jan 17 00:16:43.639 INFO Fetch successful
Jan 17 00:16:43.640511 coreos-metadata[1421]: Jan 17 00:16:43.639 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Jan 17 00:16:43.646042 coreos-metadata[1421]: Jan 17 00:16:43.641 INFO Fetch successful
Jan 17 00:16:43.646042 coreos-metadata[1421]: Jan 17 00:16:43.641 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Jan 17 00:16:43.648784 coreos-metadata[1421]: Jan 17 00:16:43.646 INFO Fetch successful
Jan 17 00:16:43.653918 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 00:16:43.675947 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found loop4
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found loop5
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found loop6
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found loop7
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found sda
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found sda1
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found sda2
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found sda3
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found usr
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found sda4
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found sda6
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found sda7
Jan 17 00:16:43.694291 extend-filesystems[1426]: Found sda9
Jan 17 00:16:43.694291 extend-filesystems[1426]: Checking size of /dev/sda9
Jan 17 00:16:43.695851 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 00:16:43.706641 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Jan 17 00:16:43.709755 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 00:16:43.718988 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 00:16:43.736901 dbus-daemon[1422]: [system] SELinux support is enabled
Jan 17 00:16:43.739884 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 00:16:43.741833 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting
Jan 17 00:16:43.741870 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 17 00:16:43.741886 ntpd[1428]: ----------------------------------------------------
Jan 17 00:16:43.741901 ntpd[1428]: ntp-4 is maintained by Network Time Foundation,
Jan 17 00:16:43.741917 ntpd[1428]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 17 00:16:43.741932 ntpd[1428]: corporation. Support and training for ntp-4 are
Jan 17 00:16:43.741946 ntpd[1428]: available at https://www.nwtime.org/support
Jan 17 00:16:43.741963 ntpd[1428]: ----------------------------------------------------
Jan 17 00:16:43.746921 dbus-daemon[1422]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1370 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 17 00:16:43.753626 ntpd[1428]: proto: precision = 0.086 usec (-23)
Jan 17 00:16:43.758927 ntpd[1428]: basedate set to 2026-01-04
Jan 17 00:16:43.758968 ntpd[1428]: gps base set to 2026-01-04 (week 2400)
Jan 17 00:16:43.772074 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 00:16:43.775428 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123
Jan 17 00:16:43.775518 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 17 00:16:43.778259 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123
Jan 17 00:16:43.778340 ntpd[1428]: Listen normally on 3 eth0 10.128.0.35:123
Jan 17 00:16:43.778408 ntpd[1428]: Listen normally on 4 lo [::1]:123
Jan 17 00:16:43.778489 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:23%2#123 flags 0x11 failed: Cannot assign requested address
Jan 17 00:16:43.778540 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:23%2#123
Jan 17 00:16:43.778564 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:23%2
Jan 17 00:16:43.779437 ntpd[1428]: Listening on routing socket on fd #21 for interface updates
Jan 17 00:16:43.785524 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 00:16:43.785615 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 00:16:43.805338 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 00:16:43.805752 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 00:16:43.806383 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 00:16:43.806818 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 00:16:43.823271 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 00:16:43.824688 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 00:16:43.888390 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 17 00:16:43.894091 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 17 00:16:43.972693 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks
Jan 17 00:16:43.972786 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1304)
Jan 17 00:16:43.972822 kernel: EXT4-fs (sda9): resized filesystem to 3587067
Jan 17 00:16:43.981308 extend-filesystems[1426]: Resized partition /dev/sda9
Jan 17 00:16:43.983049 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 00:16:43.983104 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 00:16:43.993712 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 00:16:43.993762 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 00:16:44.006463 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 00:16:44.006853 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 00:16:44.022786 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 17 00:16:44.022818 systemd-logind[1440]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jan 17 00:16:44.022851 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 00:16:44.023427 systemd-logind[1440]: New seat seat0.
Jan 17 00:16:44.024506 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024)
Jan 17 00:16:44.024506 extend-filesystems[1451]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 17 00:16:44.024506 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 2
Jan 17 00:16:44.024506 extend-filesystems[1451]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long.
Jan 17 00:16:44.037173 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 00:16:44.038346 extend-filesystems[1426]: Resized filesystem in /dev/sda9
Jan 17 00:16:44.044204 jq[1447]: true
Jan 17 00:16:44.051410 update_engine[1442]: I20260117 00:16:43.957479 1442 main.cc:92] Flatcar Update Engine starting
Jan 17 00:16:44.051410 update_engine[1442]: I20260117 00:16:43.983603 1442 update_check_scheduler.cc:74] Next update check in 5m18s
Jan 17 00:16:44.119453 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 00:16:44.142753 jq[1465]: true
Jan 17 00:16:44.153173 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 00:16:44.168335 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 00:16:44.176871 tar[1456]: linux-amd64/LICENSE
Jan 17 00:16:44.177710 tar[1456]: linux-amd64/helm
Jan 17 00:16:44.182734 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 00:16:44.195634 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 00:16:44.201865 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 17 00:16:44.222757 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 00:16:44.301122 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 00:16:44.324154 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 00:16:44.344209 systemd[1]: Started sshd@0-10.128.0.35:22-4.153.228.146:34286.service - OpenSSH per-connection server daemon (4.153.228.146:34286).
Jan 17 00:16:44.380813 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 00:16:44.381172 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 00:16:44.400216 bash[1502]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:16:44.408322 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 00:16:44.419641 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 00:16:44.448180 systemd[1]: Starting sshkeys.service...
Jan 17 00:16:44.483012 systemd-networkd[1370]: eth0: Gained IPv6LL
Jan 17 00:16:44.512468 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 00:16:44.533521 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 00:16:44.556036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
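The extend-filesystems entries above record an online grow of the root ext4: the /dev/sda9 partition was enlarged first, then resize2fs expanded the mounted filesystem from 1617920 to 3587067 4k blocks. A hedged sketch of the same step (it assumes the partition has already been grown; when given no explicit size, resize2fs grows a mounted ext4 to fill its device):

    #!/usr/bin/env python3
    # Sketch: online-grow an ext4 root the way extend-filesystems does.
    # Requires root; the kernel performs the resize while / stays mounted.
    # Illustrative, not Flatcar's actual extend-filesystems script.
    import os
    import subprocess

    DEV = "/dev/sda9"

    before = os.statvfs("/").f_blocks  # blocks of f_frsize (4k here)
    subprocess.run(["resize2fs", DEV], check=True)
    after = os.statvfs("/").f_blocks
    print(f"/ grew from {before} to {after} blocks")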
Jan 17 00:16:44.576225 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:16:44.593235 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 17 00:16:44.604525 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:16:44.638786 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:16:44.657834 init.sh[1518]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 17 00:16:44.657834 init.sh[1518]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 17 00:16:44.657834 init.sh[1518]: + /usr/bin/google_instance_setup Jan 17 00:16:44.656411 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:16:44.668872 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 00:16:44.675967 dbus-daemon[1422]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1477 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 00:16:44.679742 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:16:44.701709 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:16:44.713995 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:16:44.725117 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 00:16:44.743922 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:16:44.751215 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 00:16:44.761899 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:16:44.791176 coreos-metadata[1523]: Jan 17 00:16:44.789 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 17 00:16:44.801649 coreos-metadata[1523]: Jan 17 00:16:44.798 INFO Fetch failed with 404: resource not found Jan 17 00:16:44.801649 coreos-metadata[1523]: Jan 17 00:16:44.798 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 17 00:16:44.808794 coreos-metadata[1523]: Jan 17 00:16:44.807 INFO Fetch successful Jan 17 00:16:44.808794 coreos-metadata[1523]: Jan 17 00:16:44.807 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 17 00:16:44.808794 coreos-metadata[1523]: Jan 17 00:16:44.807 INFO Fetch failed with 404: resource not found Jan 17 00:16:44.808794 coreos-metadata[1523]: Jan 17 00:16:44.807 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 17 00:16:44.809683 coreos-metadata[1523]: Jan 17 00:16:44.809 INFO Fetch failed with 404: resource not found Jan 17 00:16:44.809683 coreos-metadata[1523]: Jan 17 00:16:44.809 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 17 00:16:44.811022 coreos-metadata[1523]: Jan 17 00:16:44.810 INFO Fetch successful Jan 17 00:16:44.819656 unknown[1523]: wrote ssh authorized keys file for user: core Jan 17 00:16:44.934316 update-ssh-keys[1541]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:16:44.931081 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:16:44.949047 systemd[1]: Finished sshkeys.service. 
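The coreos-metadata fetch sequence above walks GCE's metadata endpoints with an instance-then-project fallback, tolerating 404s for attributes that are simply unset, and ends by writing the "username:key" entries for core into authorized_keys. A minimal sketch of the same fetch (the endpoints and the mandatory Metadata-Flavor header are the real GCE interface; the error handling here is deliberately thin):

    #!/usr/bin/env python3
    # Sketch: fetch GCE ssh-keys metadata and extract keys for one user,
    # following the instance-then-project fallback the agent logs above.
    import urllib.error
    import urllib.request

    BASE = "http://169.254.169.254/computeMetadata/v1"
    PATHS = [
        "/instance/attributes/ssh-keys",
        "/project/attributes/ssh-keys",
    ]

    def fetch(path):
        req = urllib.request.Request(BASE + path,
                                     headers={"Metadata-Flavor": "Google"})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.read().decode()
        except urllib.error.HTTPError:
            return None  # e.g. 404: attribute not set, try the next source

    keys = []
    for path in PATHS:
        body = fetch(path)
        if body:
            for line in body.splitlines():
                user, _, key = line.partition(":")
                if user == "core" and key:
                    keys.append(key)
    print("\n".join(keys))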
Jan 17 00:16:44.976144 polkitd[1537]: Started polkitd version 121 Jan 17 00:16:45.011576 polkitd[1537]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 00:16:45.012755 polkitd[1537]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 00:16:45.018444 polkitd[1537]: Finished loading, compiling and executing 2 rules Jan 17 00:16:45.021402 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 00:16:45.021748 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 00:16:45.022217 polkitd[1537]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 00:16:45.060041 sshd[1501]: Accepted publickey for core from 4.153.228.146 port 34286 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:16:45.066260 sshd[1501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:16:45.088997 systemd-hostnamed[1477]: Hostname set to (transient) Jan 17 00:16:45.092176 systemd-resolved[1371]: System hostname changed to 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694'. Jan 17 00:16:45.106579 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:16:45.128821 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:16:45.153076 systemd-logind[1440]: New session 1 of user core. Jan 17 00:16:45.192894 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:16:45.220863 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:16:45.252061 containerd[1466]: time="2026-01-17T00:16:45.248336087Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:16:45.280483 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:16:45.372195 containerd[1466]: time="2026-01-17T00:16:45.371786366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:16:45.381039 containerd[1466]: time="2026-01-17T00:16:45.379471813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:16:45.381039 containerd[1466]: time="2026-01-17T00:16:45.379543555Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:16:45.381039 containerd[1466]: time="2026-01-17T00:16:45.379573537Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:16:45.381039 containerd[1466]: time="2026-01-17T00:16:45.379886466Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:16:45.381039 containerd[1466]: time="2026-01-17T00:16:45.379923445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:16:45.381039 containerd[1466]: time="2026-01-17T00:16:45.380030443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:16:45.381039 containerd[1466]: time="2026-01-17T00:16:45.380053269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:16:45.381039 containerd[1466]: time="2026-01-17T00:16:45.380373745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:16:45.381039 containerd[1466]: time="2026-01-17T00:16:45.380403090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:16:45.381039 containerd[1466]: time="2026-01-17T00:16:45.380428439Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:16:45.381039 containerd[1466]: time="2026-01-17T00:16:45.380448329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:16:45.381638 containerd[1466]: time="2026-01-17T00:16:45.380614708Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:16:45.381638 containerd[1466]: time="2026-01-17T00:16:45.380972490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:16:45.382019 containerd[1466]: time="2026-01-17T00:16:45.381973256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:16:45.382180 containerd[1466]: time="2026-01-17T00:16:45.382155739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:16:45.382433 containerd[1466]: time="2026-01-17T00:16:45.382403106Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:16:45.382646 containerd[1466]: time="2026-01-17T00:16:45.382616224Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:16:45.396138 containerd[1466]: time="2026-01-17T00:16:45.396015529Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:16:45.399954 containerd[1466]: time="2026-01-17T00:16:45.396666511Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:16:45.399954 containerd[1466]: time="2026-01-17T00:16:45.396825888Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:16:45.399954 containerd[1466]: time="2026-01-17T00:16:45.396865513Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:16:45.399954 containerd[1466]: time="2026-01-17T00:16:45.396897265Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:16:45.399954 containerd[1466]: time="2026-01-17T00:16:45.397166114Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.400696598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401005930Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401036694Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401063025Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401092192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401117666Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401161413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401191488Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401218873Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401243485Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401269458Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401292604Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401332048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.401789 containerd[1466]: time="2026-01-17T00:16:45.401359594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401384530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401411285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401432081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401457768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401481844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401507599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401531275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401559778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401609577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401633991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401653874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.402490 containerd[1466]: time="2026-01-17T00:16:45.401697730Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:16:45.405388 containerd[1466]: time="2026-01-17T00:16:45.403392498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.405388 containerd[1466]: time="2026-01-17T00:16:45.403456841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.405388 containerd[1466]: time="2026-01-17T00:16:45.403486278Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:16:45.405388 containerd[1466]: time="2026-01-17T00:16:45.403622932Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:16:45.405388 containerd[1466]: time="2026-01-17T00:16:45.403765342Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:16:45.405388 containerd[1466]: time="2026-01-17T00:16:45.403790153Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:16:45.405388 containerd[1466]: time="2026-01-17T00:16:45.403813554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:16:45.405388 containerd[1466]: time="2026-01-17T00:16:45.403831697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:16:45.405388 containerd[1466]: time="2026-01-17T00:16:45.403855714Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:16:45.405388 containerd[1466]: time="2026-01-17T00:16:45.403959410Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:16:45.405388 containerd[1466]: time="2026-01-17T00:16:45.403982955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:16:45.406103 containerd[1466]: time="2026-01-17T00:16:45.404474075Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:16:45.406103 containerd[1466]: time="2026-01-17T00:16:45.404625566Z" level=info msg="Connect containerd service" Jan 17 00:16:45.406103 containerd[1466]: time="2026-01-17T00:16:45.404723170Z" level=info msg="using legacy CRI server" Jan 17 00:16:45.406103 containerd[1466]: time="2026-01-17T00:16:45.404741041Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:16:45.406103 containerd[1466]: time="2026-01-17T00:16:45.404924462Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:16:45.407773 containerd[1466]: time="2026-01-17T00:16:45.407702858Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:16:45.408541 
containerd[1466]: time="2026-01-17T00:16:45.408063727Z" level=info msg="Start subscribing containerd event" Jan 17 00:16:45.408541 containerd[1466]: time="2026-01-17T00:16:45.408165286Z" level=info msg="Start recovering state" Jan 17 00:16:45.408541 containerd[1466]: time="2026-01-17T00:16:45.408293965Z" level=info msg="Start event monitor" Jan 17 00:16:45.408541 containerd[1466]: time="2026-01-17T00:16:45.408328275Z" level=info msg="Start snapshots syncer" Jan 17 00:16:45.408541 containerd[1466]: time="2026-01-17T00:16:45.408345936Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:16:45.408541 containerd[1466]: time="2026-01-17T00:16:45.408358732Z" level=info msg="Start streaming server" Jan 17 00:16:45.409751 containerd[1466]: time="2026-01-17T00:16:45.409551888Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:16:45.409751 containerd[1466]: time="2026-01-17T00:16:45.409678016Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:16:45.410139 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:16:45.410884 containerd[1466]: time="2026-01-17T00:16:45.410359414Z" level=info msg="containerd successfully booted in 0.167021s" Jan 17 00:16:45.621655 systemd[1555]: Queued start job for default target default.target. Jan 17 00:16:45.630792 systemd[1555]: Created slice app.slice - User Application Slice. Jan 17 00:16:45.630854 systemd[1555]: Reached target paths.target - Paths. Jan 17 00:16:45.630886 systemd[1555]: Reached target timers.target - Timers. Jan 17 00:16:45.645977 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:16:45.678730 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:16:45.678998 systemd[1555]: Reached target sockets.target - Sockets. Jan 17 00:16:45.679026 systemd[1555]: Reached target basic.target - Basic System. Jan 17 00:16:45.679109 systemd[1555]: Reached target default.target - Main User Target. Jan 17 00:16:45.679173 systemd[1555]: Startup finished in 372ms. Jan 17 00:16:45.681856 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:16:45.703035 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:16:45.932244 systemd[1]: Started sshd@1-10.128.0.35:22-4.153.228.146:52276.service - OpenSSH per-connection server daemon (4.153.228.146:52276). Jan 17 00:16:46.169048 tar[1456]: linux-amd64/README.md Jan 17 00:16:46.214990 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:16:46.255443 instance-setup[1522]: INFO Running google_set_multiqueue. Jan 17 00:16:46.281944 instance-setup[1522]: INFO Set channels for eth0 to 2. Jan 17 00:16:46.289399 instance-setup[1522]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Jan 17 00:16:46.292782 sshd[1569]: Accepted publickey for core from 4.153.228.146 port 52276 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:16:46.293673 instance-setup[1522]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Jan 17 00:16:46.293969 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:16:46.293981 instance-setup[1522]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Jan 17 00:16:46.298437 instance-setup[1522]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Jan 17 00:16:46.298533 instance-setup[1522]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. 
Jan 17 00:16:46.302641 instance-setup[1522]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Jan 17 00:16:46.302900 instance-setup[1522]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. Jan 17 00:16:46.307472 instance-setup[1522]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Jan 17 00:16:46.310245 systemd-logind[1440]: New session 2 of user core. Jan 17 00:16:46.319201 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:16:46.324179 instance-setup[1522]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 00:16:46.331579 instance-setup[1522]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 00:16:46.336096 instance-setup[1522]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 17 00:16:46.336920 instance-setup[1522]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 17 00:16:46.376685 init.sh[1518]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 17 00:16:46.507947 sshd[1569]: pam_unix(sshd:session): session closed for user core Jan 17 00:16:46.519555 systemd[1]: sshd@1-10.128.0.35:22-4.153.228.146:52276.service: Deactivated successfully. Jan 17 00:16:46.525577 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:16:46.527479 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:16:46.530712 systemd-logind[1440]: Removed session 2. Jan 17 00:16:46.557136 systemd[1]: Started sshd@2-10.128.0.35:22-4.153.228.146:52292.service - OpenSSH per-connection server daemon (4.153.228.146:52292). Jan 17 00:16:46.634245 startup-script[1603]: INFO Starting startup scripts. Jan 17 00:16:46.643242 startup-script[1603]: INFO No startup scripts found in metadata. Jan 17 00:16:46.643423 startup-script[1603]: INFO Finished running startup scripts. Jan 17 00:16:46.683999 init.sh[1518]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 17 00:16:46.683999 init.sh[1518]: + daemon_pids=() Jan 17 00:16:46.684256 init.sh[1518]: + for d in accounts clock_skew network Jan 17 00:16:46.685077 init.sh[1518]: + daemon_pids+=($!) Jan 17 00:16:46.685077 init.sh[1518]: + for d in accounts clock_skew network Jan 17 00:16:46.685253 init.sh[1612]: + /usr/bin/google_accounts_daemon Jan 17 00:16:46.686265 init.sh[1613]: + /usr/bin/google_clock_skew_daemon Jan 17 00:16:46.688670 init.sh[1518]: + daemon_pids+=($!) Jan 17 00:16:46.688670 init.sh[1518]: + for d in accounts clock_skew network Jan 17 00:16:46.688670 init.sh[1518]: + daemon_pids+=($!) Jan 17 00:16:46.688670 init.sh[1518]: + NOTIFY_SOCKET=/run/systemd/notify Jan 17 00:16:46.688670 init.sh[1518]: + /usr/bin/systemd-notify --ready Jan 17 00:16:46.690447 init.sh[1614]: + /usr/bin/google_network_daemon Jan 17 00:16:46.716452 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 17 00:16:46.734523 init.sh[1518]: + wait -n 1612 1613 1614 Jan 17 00:16:46.744007 ntpd[1428]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:23%2]:123 Jan 17 00:16:46.835698 sshd[1610]: Accepted publickey for core from 4.153.228.146 port 52292 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:16:46.834437 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:16:46.857136 systemd-logind[1440]: New session 3 of user core.
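In the oem-gce trace above, init.sh exports NOTIFY_SOCKET and runs systemd-notify --ready so that systemd marks the Type=notify unit as started before the script blocks in wait. A small sketch of the same readiness notification done directly over the notify socket (this is the standard sd_notify datagram protocol; the abstract-namespace handling mirrors what systemd-notify does):

    #!/usr/bin/env python3
    # Sketch: send READY=1 to systemd's notify socket, the same thing
    # "+ /usr/bin/systemd-notify --ready" does in the log above.
    import os
    import socket

    def sd_notify(state="READY=1"):
        path = os.environ.get("NOTIFY_SOCKET")
        if not path:
            return  # not running under a Type=notify systemd unit
        if path.startswith("@"):
            path = "\0" + path[1:]  # abstract-namespace socket
        with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
            s.connect(path)
            s.send(state.encode())

    sd_notify()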
Jan 17 00:16:46.862108 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:16:47.050084 sshd[1610]: pam_unix(sshd:session): session closed for user core Jan 17 00:16:47.066887 systemd[1]: sshd@2-10.128.0.35:22-4.153.228.146:52292.service: Deactivated successfully. Jan 17 00:16:47.075539 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:16:47.082270 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:16:47.086666 systemd-logind[1440]: Removed session 3. Jan 17 00:16:47.194753 google-clock-skew[1613]: INFO Starting Google Clock Skew daemon. Jan 17 00:16:47.209630 google-clock-skew[1613]: INFO Clock drift token has changed: 0. Jan 17 00:16:47.264118 google-networking[1614]: INFO Starting Google Networking daemon. Jan 17 00:16:47.341118 groupadd[1628]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 17 00:16:47.348605 groupadd[1628]: group added to /etc/gshadow: name=google-sudoers Jan 17 00:16:47.417890 groupadd[1628]: new group: name=google-sudoers, GID=1000 Jan 17 00:16:47.456881 google-accounts[1612]: INFO Starting Google Accounts daemon. Jan 17 00:16:47.475443 google-accounts[1612]: WARNING OS Login not installed. Jan 17 00:16:47.477861 google-accounts[1612]: INFO Creating a new user account for 0. Jan 17 00:16:47.485841 init.sh[1636]: useradd: invalid user name '0': use --badname to ignore Jan 17 00:16:47.485887 google-accounts[1612]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 17 00:16:47.691968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:16:47.705074 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:16:47.713640 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:16:47.717061 systemd[1]: Startup finished in 1.215s (kernel) + 11.004s (initrd) + 10.997s (userspace) = 23.217s. Jan 17 00:16:48.000649 google-clock-skew[1613]: INFO Synced system time with hardware clock. Jan 17 00:16:48.001549 systemd-resolved[1371]: Clock change detected. Flushing caches. Jan 17 00:16:48.820231 kubelet[1643]: E0117 00:16:48.820138 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:16:48.823574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:16:48.823851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:16:48.824478 systemd[1]: kubelet.service: Consumed 1.418s CPU time. Jan 17 00:16:57.111075 systemd[1]: Started sshd@3-10.128.0.35:22-4.153.228.146:45042.service - OpenSSH per-connection server daemon (4.153.228.146:45042). Jan 17 00:16:57.347837 sshd[1655]: Accepted publickey for core from 4.153.228.146 port 45042 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:16:57.350985 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:16:57.362519 systemd-logind[1440]: New session 4 of user core. Jan 17 00:16:57.370018 systemd[1]: Started session-4.scope - Session 4 of User core. 
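The kubelet failure above is an ordering problem rather than a kubelet bug: the unit starts before /var/lib/kubelet/config.yaml has been provisioned, exits with status 1, and systemd's restart logic schedules another attempt (the restart counter appears further on). A hypothetical provisioning-side guard that would avoid the churn (illustrative only; Flatcar does not ship this helper):

    #!/usr/bin/env python3
    # Sketch: wait for the kubelet config the failing unit above needs,
    # instead of letting kubelet crash-loop until provisioning catches up.
    import os
    import time

    CONFIG = "/var/lib/kubelet/config.yaml"

    def wait_for_kubelet_config(timeout=300.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(CONFIG):
                return True
            time.sleep(2)
        return False

    if not wait_for_kubelet_config():
        raise SystemExit(f"{CONFIG} still missing; kubelet would exit 1 again")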
Jan 17 00:16:57.522970 sshd[1655]: pam_unix(sshd:session): session closed for user core Jan 17 00:16:57.528726 systemd[1]: sshd@3-10.128.0.35:22-4.153.228.146:45042.service: Deactivated successfully. Jan 17 00:16:57.531824 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:16:57.534477 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:16:57.536345 systemd-logind[1440]: Removed session 4. Jan 17 00:16:57.565844 systemd[1]: Started sshd@4-10.128.0.35:22-4.153.228.146:45056.service - OpenSSH per-connection server daemon (4.153.228.146:45056). Jan 17 00:16:57.802300 sshd[1662]: Accepted publickey for core from 4.153.228.146 port 45056 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:16:57.805289 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:16:57.813553 systemd-logind[1440]: New session 5 of user core. Jan 17 00:16:57.826937 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:16:57.972813 sshd[1662]: pam_unix(sshd:session): session closed for user core Jan 17 00:16:57.979554 systemd[1]: sshd@4-10.128.0.35:22-4.153.228.146:45056.service: Deactivated successfully. Jan 17 00:16:57.982396 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:16:57.983852 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:16:57.986324 systemd-logind[1440]: Removed session 5. Jan 17 00:16:58.021181 systemd[1]: Started sshd@5-10.128.0.35:22-4.153.228.146:45058.service - OpenSSH per-connection server daemon (4.153.228.146:45058). Jan 17 00:16:58.263840 sshd[1669]: Accepted publickey for core from 4.153.228.146 port 45058 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:16:58.266111 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:16:58.273816 systemd-logind[1440]: New session 6 of user core. Jan 17 00:16:58.281846 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:16:58.440286 sshd[1669]: pam_unix(sshd:session): session closed for user core Jan 17 00:16:58.445486 systemd[1]: sshd@5-10.128.0.35:22-4.153.228.146:45058.service: Deactivated successfully. Jan 17 00:16:58.448722 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:16:58.451025 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:16:58.453303 systemd-logind[1440]: Removed session 6. Jan 17 00:16:58.486039 systemd[1]: Started sshd@6-10.128.0.35:22-4.153.228.146:45060.service - OpenSSH per-connection server daemon (4.153.228.146:45060). Jan 17 00:16:58.715737 sshd[1676]: Accepted publickey for core from 4.153.228.146 port 45060 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:16:58.717994 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:16:58.725831 systemd-logind[1440]: New session 7 of user core. Jan 17 00:16:58.732831 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:16:58.896181 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:16:58.896871 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:16:58.898423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:16:58.908656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
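The "Accepted publickey ... SHA256:1hUT..." entries in these sessions log the key's fingerprint: the unpadded base64 of a SHA-256 digest over the base64-decoded key blob from authorized_keys. A short sketch that recomputes the same fingerprint from a public key file (the path is just an example):

    #!/usr/bin/env python3
    # Sketch: recompute the SHA256:... fingerprint sshd logs on
    # "Accepted publickey" lines. The fingerprint is base64(sha256(blob))
    # without padding, where blob is the decoded key material.
    import base64
    import hashlib
    import sys

    # usage: fingerprint.py ~/.ssh/id_ed25519.pub
    with open(sys.argv[1]) as f:
        blob = base64.b64decode(f.read().split()[1])
    digest = hashlib.sha256(blob).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))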
Jan 17 00:16:58.916788 sudo[1679]: pam_unix(sudo:session): session closed for user root Jan 17 00:16:58.951310 sshd[1676]: pam_unix(sshd:session): session closed for user core Jan 17 00:16:58.959189 systemd[1]: sshd@6-10.128.0.35:22-4.153.228.146:45060.service: Deactivated successfully. Jan 17 00:16:58.962230 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:16:58.963968 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:16:58.968204 systemd-logind[1440]: Removed session 7. Jan 17 00:16:58.997192 systemd[1]: Started sshd@7-10.128.0.35:22-4.153.228.146:45076.service - OpenSSH per-connection server daemon (4.153.228.146:45076). Jan 17 00:16:59.234797 sshd[1687]: Accepted publickey for core from 4.153.228.146 port 45076 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:16:59.238009 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:16:59.252256 systemd-logind[1440]: New session 8 of user core. Jan 17 00:16:59.256961 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:16:59.306530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:16:59.325271 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:16:59.400927 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:16:59.402247 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:16:59.410493 kubelet[1695]: E0117 00:16:59.409373 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:16:59.411030 sudo[1703]: pam_unix(sudo:session): session closed for user root Jan 17 00:16:59.418228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:16:59.418795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:16:59.431213 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:16:59.431862 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:16:59.450027 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:16:59.465995 auditctl[1707]: No rules Jan 17 00:16:59.466803 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:16:59.467137 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:16:59.476280 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:16:59.519328 augenrules[1725]: No rules Jan 17 00:16:59.521387 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:16:59.523169 sudo[1702]: pam_unix(sudo:session): session closed for user root Jan 17 00:16:59.556105 sshd[1687]: pam_unix(sshd:session): session closed for user core Jan 17 00:16:59.562827 systemd[1]: sshd@7-10.128.0.35:22-4.153.228.146:45076.service: Deactivated successfully. Jan 17 00:16:59.565850 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:16:59.567032 systemd-logind[1440]: Session 8 logged out. 
Waiting for processes to exit. Jan 17 00:16:59.568897 systemd-logind[1440]: Removed session 8. Jan 17 00:16:59.603088 systemd[1]: Started sshd@8-10.128.0.35:22-4.153.228.146:45086.service - OpenSSH per-connection server daemon (4.153.228.146:45086). Jan 17 00:16:59.823161 sshd[1733]: Accepted publickey for core from 4.153.228.146 port 45086 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:16:59.825424 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:16:59.834580 systemd-logind[1440]: New session 9 of user core. Jan 17 00:16:59.840948 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:16:59.973505 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:16:59.974130 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:17:00.489074 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:17:00.500404 (dockerd)[1751]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:17:01.025728 dockerd[1751]: time="2026-01-17T00:17:01.025621762Z" level=info msg="Starting up" Jan 17 00:17:01.188210 systemd[1]: var-lib-docker-metacopy\x2dcheck2561792164-merged.mount: Deactivated successfully. Jan 17 00:17:01.214315 dockerd[1751]: time="2026-01-17T00:17:01.213490199Z" level=info msg="Loading containers: start." Jan 17 00:17:01.426893 kernel: Initializing XFRM netlink socket Jan 17 00:17:01.578748 systemd-networkd[1370]: docker0: Link UP Jan 17 00:17:01.608284 dockerd[1751]: time="2026-01-17T00:17:01.608218625Z" level=info msg="Loading containers: done." Jan 17 00:17:01.636715 dockerd[1751]: time="2026-01-17T00:17:01.635055007Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:17:01.637265 dockerd[1751]: time="2026-01-17T00:17:01.636925798Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:17:01.637265 dockerd[1751]: time="2026-01-17T00:17:01.637226992Z" level=info msg="Daemon has completed initialization" Jan 17 00:17:01.639847 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck110926431-merged.mount: Deactivated successfully. Jan 17 00:17:01.699810 dockerd[1751]: time="2026-01-17T00:17:01.699420870Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:17:01.699885 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:17:02.852483 containerd[1466]: time="2026-01-17T00:17:02.852359305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:17:03.427639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928133614.mount: Deactivated successfully. 
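
dockerd came up on the overlay2 storage driver and warned that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. A sketch for confirming both facts on a node like this one (the kernel-config paths are assumptions; not every image ships /proc/config.gz or /boot/config-*):

    # Which storage driver did dockerd pick?
    docker info --format '{{.Driver}}'        # expect: overlay2
    # Was the kernel built with redirect_dir, the cause of the warning?
    zcat /proc/config.gz 2>/dev/null | grep OVERLAY_FS_REDIRECT_DIR \
      || grep OVERLAY_FS_REDIRECT_DIR "/boot/config-$(uname -r)" 2>/dev/null
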
Jan 17 00:17:05.349552 containerd[1466]: time="2026-01-17T00:17:05.349469716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:05.351595 containerd[1466]: time="2026-01-17T00:17:05.351501524Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29078734" Jan 17 00:17:05.353271 containerd[1466]: time="2026-01-17T00:17:05.353169389Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:05.362526 containerd[1466]: time="2026-01-17T00:17:05.360306959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:05.363944 containerd[1466]: time="2026-01-17T00:17:05.363859594Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.511419997s" Jan 17 00:17:05.364274 containerd[1466]: time="2026-01-17T00:17:05.364237415Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 17 00:17:05.365495 containerd[1466]: time="2026-01-17T00:17:05.365403391Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:17:07.075390 containerd[1466]: time="2026-01-17T00:17:07.075280522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:07.077142 containerd[1466]: time="2026-01-17T00:17:07.077045691Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24995412" Jan 17 00:17:07.079247 containerd[1466]: time="2026-01-17T00:17:07.079122609Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:07.085588 containerd[1466]: time="2026-01-17T00:17:07.084761657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:07.086947 containerd[1466]: time="2026-01-17T00:17:07.086666664Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.721204594s" Jan 17 00:17:07.086947 containerd[1466]: time="2026-01-17T00:17:07.086755939Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 17 
00:17:07.087581 containerd[1466]: time="2026-01-17T00:17:07.087537775Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:17:08.590143 containerd[1466]: time="2026-01-17T00:17:08.590048387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:08.592296 containerd[1466]: time="2026-01-17T00:17:08.591952447Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19407116" Jan 17 00:17:08.594585 containerd[1466]: time="2026-01-17T00:17:08.593862555Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:08.599193 containerd[1466]: time="2026-01-17T00:17:08.599106189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:08.601025 containerd[1466]: time="2026-01-17T00:17:08.600932380Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.5133311s" Jan 17 00:17:08.601317 containerd[1466]: time="2026-01-17T00:17:08.601284782Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 17 00:17:08.602675 containerd[1466]: time="2026-01-17T00:17:08.602604313Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:17:09.661026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:17:09.674685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:17:10.058040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:17:10.068980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3087932563.mount: Deactivated successfully. Jan 17 00:17:10.080699 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:17:10.181968 kubelet[1969]: E0117 00:17:10.181843 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:17:10.187012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:17:10.187305 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
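
Both kubelet starts so far (restart counters 1 and 2) die the same way: /var/lib/kubelet/config.yaml does not exist yet, so the process exits and systemd schedules another restart. On a kubeadm-style bootstrap that file is written by kubeadm init/join, which is why the crash loop is expected until bootstrap finishes. A minimal sketch of the file's shape, only to show what the kubelet is waiting for (the real file is generated, not hand-written):

    # Hypothetical minimal KubeletConfiguration; kubeadm writes the real one
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                    # matches this host's cgroup v2 setup
    staticPodPath: /etc/kubernetes/manifests # where the control-plane pods land
    EOF
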
Jan 17 00:17:10.968071 containerd[1466]: time="2026-01-17T00:17:10.967977881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:10.970341 containerd[1466]: time="2026-01-17T00:17:10.969819837Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31163922" Jan 17 00:17:10.974385 containerd[1466]: time="2026-01-17T00:17:10.972606052Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:10.978052 containerd[1466]: time="2026-01-17T00:17:10.977970351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:10.979788 containerd[1466]: time="2026-01-17T00:17:10.979710239Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.376577156s" Jan 17 00:17:10.979788 containerd[1466]: time="2026-01-17T00:17:10.979796939Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 00:17:10.981505 containerd[1466]: time="2026-01-17T00:17:10.981421091Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:17:11.456362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164390227.mount: Deactivated successfully. 
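
The images being pulled here (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, then coredns, pause and etcd below) are exactly the control-plane set a kubeadm bootstrap needs. Assuming kubeadm is driving this node, the same list can be shown, and pre-pulled, ahead of time:

    # Enumerate / pre-fetch the control-plane images for this release
    kubeadm config images list --kubernetes-version v1.32.11
    kubeadm config images pull --kubernetes-version v1.32.11
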
Jan 17 00:17:12.930052 containerd[1466]: time="2026-01-17T00:17:12.929962865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:12.932586 containerd[1466]: time="2026-01-17T00:17:12.931957401Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18572327" Jan 17 00:17:12.935481 containerd[1466]: time="2026-01-17T00:17:12.934761204Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:12.946122 containerd[1466]: time="2026-01-17T00:17:12.946036548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:12.947911 containerd[1466]: time="2026-01-17T00:17:12.947828292Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.966077711s" Jan 17 00:17:12.947911 containerd[1466]: time="2026-01-17T00:17:12.947913174Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 17 00:17:12.949526 containerd[1466]: time="2026-01-17T00:17:12.949438947Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:17:13.496802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628506679.mount: Deactivated successfully. 
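
The var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount units that keep being deactivated are transient mounts containerd creates while unpacking image layers; the \x2d is systemd's escaping of a literal '-' inside the mount path. The escaping can be decoded directly:

    # Recover the filesystem path behind an escaped mount unit name
    systemd-escape -u -p 'var-lib-containerd-tmpmounts-containerd\x2dmount1928133614'
    # -> /var/lib/containerd/tmpmounts/containerd-mount1928133614
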
Jan 17 00:17:13.509250 containerd[1466]: time="2026-01-17T00:17:13.509157830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:13.511044 containerd[1466]: time="2026-01-17T00:17:13.510934830Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322136" Jan 17 00:17:13.514491 containerd[1466]: time="2026-01-17T00:17:13.512697969Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:13.517308 containerd[1466]: time="2026-01-17T00:17:13.517202532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:13.519049 containerd[1466]: time="2026-01-17T00:17:13.518979135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 569.203831ms" Jan 17 00:17:13.519049 containerd[1466]: time="2026-01-17T00:17:13.519050759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:17:13.520500 containerd[1466]: time="2026-01-17T00:17:13.520393763Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:17:13.994821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380155926.mount: Deactivated successfully. Jan 17 00:17:15.112391 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
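
Since containerd's CRI plugin is doing these pulls, its view of the image store can be checked independently of the still-failing kubelet. A read-only sketch, assuming containerd's default socket path:

    # What has the CRI runtime pulled so far?
    crictl -r unix:///run/containerd/containerd.sock images
    # Individual images can also be fetched by hand, e.g.:
    crictl -r unix:///run/containerd/containerd.sock pull registry.k8s.io/pause:3.10
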
Jan 17 00:17:16.751861 containerd[1466]: time="2026-01-17T00:17:16.751757602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:16.754347 containerd[1466]: time="2026-01-17T00:17:16.753861432Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57690069" Jan 17 00:17:16.757875 containerd[1466]: time="2026-01-17T00:17:16.757732378Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:16.762059 containerd[1466]: time="2026-01-17T00:17:16.761950480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:16.765558 containerd[1466]: time="2026-01-17T00:17:16.763730303Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.243273492s" Jan 17 00:17:16.765558 containerd[1466]: time="2026-01-17T00:17:16.763813516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 17 00:17:20.411143 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:17:20.420996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:17:20.772990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:17:20.787847 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:17:20.883673 kubelet[2121]: E0117 00:17:20.883596 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:17:20.887689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:17:20.887976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:17:20.917670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:17:20.926044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:17:20.999266 systemd[1]: Reloading requested from client PID 2136 ('systemctl') (unit session-9.scope)... Jan 17 00:17:20.999300 systemd[1]: Reloading... Jan 17 00:17:21.204507 zram_generator::config[2173]: No configuration found. Jan 17 00:17:21.411140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:17:21.529953 systemd[1]: Reloading finished in 529 ms. Jan 17 00:17:21.623360 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:17:21.627252 systemd[1]: kubelet.service: Deactivated successfully. 
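
During the daemon reload, systemd warned that docker.socket still declares its ListenStream= under the legacy /var/run directory; systemd rewrites the path in memory, so the message is cosmetic. If the noise matters, a drop-in can override the setting without editing the shipped unit (the drop-in filename below is an assumption; any name under docker.socket.d works):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=                  # empty assignment clears the inherited value
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload
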
Jan 17 00:17:21.627690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:17:21.633014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:17:21.983980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:17:21.998337 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:17:22.072781 kubelet[2229]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:17:22.072781 kubelet[2229]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:17:22.072781 kubelet[2229]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:17:22.073497 kubelet[2229]: I0117 00:17:22.072889 2229 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:17:22.604382 kubelet[2229]: I0117 00:17:22.604148 2229 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:17:22.604382 kubelet[2229]: I0117 00:17:22.604270 2229 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:17:22.606042 kubelet[2229]: I0117 00:17:22.605969 2229 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:17:22.663379 kubelet[2229]: E0117 00:17:22.663295 2229 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:17:22.665370 kubelet[2229]: I0117 00:17:22.665047 2229 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:17:22.676948 kubelet[2229]: E0117 00:17:22.676185 2229 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:17:22.676948 kubelet[2229]: I0117 00:17:22.676246 2229 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:17:22.684310 kubelet[2229]: I0117 00:17:22.684230 2229 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:17:22.688864 kubelet[2229]: I0117 00:17:22.688734 2229 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:17:22.689150 kubelet[2229]: I0117 00:17:22.688843 2229 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:17:22.689404 kubelet[2229]: I0117 00:17:22.689170 2229 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:17:22.689404 kubelet[2229]: I0117 00:17:22.689192 2229 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:17:22.689551 kubelet[2229]: I0117 00:17:22.689493 2229 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:17:22.700070 kubelet[2229]: I0117 00:17:22.699979 2229 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:17:22.700070 kubelet[2229]: I0117 00:17:22.700062 2229 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:17:22.700691 kubelet[2229]: I0117 00:17:22.700100 2229 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:17:22.700691 kubelet[2229]: I0117 00:17:22.700120 2229 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:17:22.718496 kubelet[2229]: I0117 00:17:22.717545 2229 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:17:22.718496 kubelet[2229]: I0117 00:17:22.718422 2229 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:17:22.721568 kubelet[2229]: W0117 00:17:22.721509 2229 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
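
The NodeConfig dump above is the kubelet's parsed configuration; the HardEvictionThresholds array in particular is just the evictionHard map of KubeletConfiguration in Go-struct form. Decoded into the YAML a config file would use, the five thresholds read as follows (these match the kubelet's documented defaults):

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
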
Jan 17 00:17:22.722976 kubelet[2229]: W0117 00:17:22.722880 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694&limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Jan 17 00:17:22.723198 kubelet[2229]: E0117 00:17:22.722997 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694&limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:17:22.724306 kubelet[2229]: W0117 00:17:22.721642 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Jan 17 00:17:22.724515 kubelet[2229]: E0117 00:17:22.724363 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:17:22.728067 kubelet[2229]: I0117 00:17:22.728005 2229 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:17:22.728949 kubelet[2229]: I0117 00:17:22.728368 2229 server.go:1287] "Started kubelet" Jan 17 00:17:22.740336 kubelet[2229]: I0117 00:17:22.739335 2229 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:17:22.746308 kubelet[2229]: E0117 00:17:22.743346 2229 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694.188b5c8dca5d4f3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,UID:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,},FirstTimestamp:2026-01-17 00:17:22.728308541 +0000 UTC m=+0.723574600,LastTimestamp:2026-01-17 00:17:22.728308541 +0000 UTC m=+0.723574600,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,}" Jan 17 00:17:22.748509 kubelet[2229]: I0117 00:17:22.747645 2229 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:17:22.748509 kubelet[2229]: E0117 00:17:22.748127 2229 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" Jan 17 00:17:22.749031 kubelet[2229]: I0117 00:17:22.748997 2229 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:17:22.749155 kubelet[2229]: I0117 00:17:22.749138 2229 reconciler.go:26] 
"Reconciler: start to sync state" Jan 17 00:17:22.749379 kubelet[2229]: I0117 00:17:22.749283 2229 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:17:22.752676 kubelet[2229]: I0117 00:17:22.752626 2229 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:17:22.754912 kubelet[2229]: I0117 00:17:22.750925 2229 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:17:22.755782 kubelet[2229]: E0117 00:17:22.753104 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="200ms" Jan 17 00:17:22.755782 kubelet[2229]: I0117 00:17:22.749582 2229 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:17:22.756482 kubelet[2229]: I0117 00:17:22.756105 2229 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:17:22.756482 kubelet[2229]: I0117 00:17:22.755369 2229 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:17:22.756482 kubelet[2229]: I0117 00:17:22.756309 2229 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:17:22.756482 kubelet[2229]: W0117 00:17:22.755563 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Jan 17 00:17:22.756828 kubelet[2229]: E0117 00:17:22.756797 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:17:22.759170 kubelet[2229]: I0117 00:17:22.759090 2229 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:17:22.768502 kubelet[2229]: E0117 00:17:22.767709 2229 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:17:22.788056 kubelet[2229]: I0117 00:17:22.788006 2229 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:17:22.788056 kubelet[2229]: I0117 00:17:22.788043 2229 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:17:22.788328 kubelet[2229]: I0117 00:17:22.788100 2229 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:17:22.794811 kubelet[2229]: I0117 00:17:22.794702 2229 policy_none.go:49] "None policy: Start" Jan 17 00:17:22.794811 kubelet[2229]: I0117 00:17:22.794759 2229 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:17:22.794811 kubelet[2229]: I0117 00:17:22.794785 2229 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:17:22.796267 kubelet[2229]: I0117 00:17:22.795916 2229 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 17 00:17:22.806294 kubelet[2229]: I0117 00:17:22.806099 2229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:17:22.806998 kubelet[2229]: I0117 00:17:22.806704 2229 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:17:22.808811 kubelet[2229]: I0117 00:17:22.808762 2229 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:17:22.808811 kubelet[2229]: I0117 00:17:22.808806 2229 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:17:22.809052 kubelet[2229]: E0117 00:17:22.808915 2229 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:17:22.811377 kubelet[2229]: W0117 00:17:22.811185 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Jan 17 00:17:22.811377 kubelet[2229]: E0117 00:17:22.811252 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:17:22.811830 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:17:22.829754 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:17:22.838531 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:17:22.848803 kubelet[2229]: E0117 00:17:22.848633 2229 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" Jan 17 00:17:22.849480 kubelet[2229]: I0117 00:17:22.849429 2229 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:17:22.850282 kubelet[2229]: I0117 00:17:22.849816 2229 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:17:22.850282 kubelet[2229]: I0117 00:17:22.849842 2229 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:17:22.850282 kubelet[2229]: I0117 00:17:22.850190 2229 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:17:22.853899 kubelet[2229]: E0117 00:17:22.853860 2229 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:17:22.854274 kubelet[2229]: E0117 00:17:22.854233 2229 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" Jan 17 00:17:22.935995 systemd[1]: Created slice kubepods-burstable-podc63f6aadae8432c94c273e8f3dcc61ed.slice - libcontainer container kubepods-burstable-podc63f6aadae8432c94c273e8f3dcc61ed.slice. 
Jan 17 00:17:22.950843 kubelet[2229]: E0117 00:17:22.950679 2229 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:22.955721 kubelet[2229]: I0117 00:17:22.955508 2229 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:22.956540 kubelet[2229]: E0117 00:17:22.956339 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="400ms" Jan 17 00:17:22.956540 kubelet[2229]: E0117 00:17:22.956497 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:22.960222 systemd[1]: Created slice kubepods-burstable-podce5f651fe769927d9adaa239e2ce7aad.slice - libcontainer container kubepods-burstable-podce5f651fe769927d9adaa239e2ce7aad.slice. Jan 17 00:17:22.965381 kubelet[2229]: E0117 00:17:22.964091 2229 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:22.967850 systemd[1]: Created slice kubepods-burstable-poddf084941aa0ec5356fb565f72af4050f.slice - libcontainer container kubepods-burstable-poddf084941aa0ec5356fb565f72af4050f.slice. 
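
Every request to https://10.128.0.35:6443 is refused because the apiserver this kubelet talks to is the very static pod it is about to start; during a control-plane bootstrap these errors are expected, and the lease controller's retry interval backs off accordingly (200ms, 400ms, 800ms, then 1.6s further below). A sketch for watching the port come up:

    # Refused until the kube-apiserver container starts listening
    curl -ksS https://10.128.0.35:6443/healthz || echo 'apiserver not up yet'
    ss -ltn '( sport = :6443 )'   # any listener on 6443 yet?
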
Jan 17 00:17:22.971111 kubelet[2229]: E0117 00:17:22.971046 2229 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.050160 kubelet[2229]: I0117 00:17:23.049963 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c63f6aadae8432c94c273e8f3dcc61ed-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"c63f6aadae8432c94c273e8f3dcc61ed\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.050877 kubelet[2229]: I0117 00:17:23.050083 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce5f651fe769927d9adaa239e2ce7aad-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"ce5f651fe769927d9adaa239e2ce7aad\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.050877 kubelet[2229]: I0117 00:17:23.050717 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df084941aa0ec5356fb565f72af4050f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"df084941aa0ec5356fb565f72af4050f\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.050877 kubelet[2229]: I0117 00:17:23.050829 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce5f651fe769927d9adaa239e2ce7aad-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"ce5f651fe769927d9adaa239e2ce7aad\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.051085 kubelet[2229]: I0117 00:17:23.050869 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce5f651fe769927d9adaa239e2ce7aad-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"ce5f651fe769927d9adaa239e2ce7aad\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.051085 kubelet[2229]: I0117 00:17:23.050985 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce5f651fe769927d9adaa239e2ce7aad-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"ce5f651fe769927d9adaa239e2ce7aad\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.051085 kubelet[2229]: I0117 00:17:23.051029 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c63f6aadae8432c94c273e8f3dcc61ed-ca-certs\") pod 
\"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"c63f6aadae8432c94c273e8f3dcc61ed\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.051085 kubelet[2229]: I0117 00:17:23.051072 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c63f6aadae8432c94c273e8f3dcc61ed-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"c63f6aadae8432c94c273e8f3dcc61ed\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.051365 kubelet[2229]: I0117 00:17:23.051112 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce5f651fe769927d9adaa239e2ce7aad-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"ce5f651fe769927d9adaa239e2ce7aad\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.161578 kubelet[2229]: I0117 00:17:23.161279 2229 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.162552 kubelet[2229]: E0117 00:17:23.161991 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.252950 containerd[1466]: time="2026-01-17T00:17:23.252723186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,Uid:c63f6aadae8432c94c273e8f3dcc61ed,Namespace:kube-system,Attempt:0,}" Jan 17 00:17:23.269610 containerd[1466]: time="2026-01-17T00:17:23.269520066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,Uid:ce5f651fe769927d9adaa239e2ce7aad,Namespace:kube-system,Attempt:0,}" Jan 17 00:17:23.273361 containerd[1466]: time="2026-01-17T00:17:23.273280481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,Uid:df084941aa0ec5356fb565f72af4050f,Namespace:kube-system,Attempt:0,}" Jan 17 00:17:23.357880 kubelet[2229]: E0117 00:17:23.357753 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="800ms" Jan 17 00:17:23.568834 kubelet[2229]: I0117 00:17:23.568642 2229 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.569320 kubelet[2229]: E0117 00:17:23.569239 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:23.684863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838164821.mount: Deactivated successfully. 
Jan 17 00:17:23.696697 containerd[1466]: time="2026-01-17T00:17:23.696531875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:17:23.699041 containerd[1466]: time="2026-01-17T00:17:23.698947423Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:17:23.700728 containerd[1466]: time="2026-01-17T00:17:23.700629506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313054" Jan 17 00:17:23.702898 containerd[1466]: time="2026-01-17T00:17:23.702435005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:17:23.705438 containerd[1466]: time="2026-01-17T00:17:23.705251764Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:17:23.708071 containerd[1466]: time="2026-01-17T00:17:23.707782000Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:17:23.708071 containerd[1466]: time="2026-01-17T00:17:23.707914485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:17:23.719783 containerd[1466]: time="2026-01-17T00:17:23.719698123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:17:23.723193 containerd[1466]: time="2026-01-17T00:17:23.723115818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 449.717731ms" Jan 17 00:17:23.726548 containerd[1466]: time="2026-01-17T00:17:23.726401820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 473.555533ms" Jan 17 00:17:23.726845 containerd[1466]: time="2026-01-17T00:17:23.726790723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 457.12262ms" Jan 17 00:17:23.741234 kubelet[2229]: W0117 00:17:23.741104 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Jan 17 00:17:23.741234 
kubelet[2229]: E0117 00:17:23.741216 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:17:23.763676 kubelet[2229]: W0117 00:17:23.762193 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694&limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Jan 17 00:17:23.763676 kubelet[2229]: E0117 00:17:23.762337 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694&limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:17:23.970619 containerd[1466]: time="2026-01-17T00:17:23.970199383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:17:23.973917 containerd[1466]: time="2026-01-17T00:17:23.973686267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:17:23.974323 containerd[1466]: time="2026-01-17T00:17:23.973866098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:23.976883 containerd[1466]: time="2026-01-17T00:17:23.975467622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:23.987878 containerd[1466]: time="2026-01-17T00:17:23.987359145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:17:23.990747 containerd[1466]: time="2026-01-17T00:17:23.990608632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:17:23.991020 containerd[1466]: time="2026-01-17T00:17:23.990794329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:23.993436 containerd[1466]: time="2026-01-17T00:17:23.992977522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:17:23.993436 containerd[1466]: time="2026-01-17T00:17:23.993072310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:17:23.993436 containerd[1466]: time="2026-01-17T00:17:23.993097350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:23.993436 containerd[1466]: time="2026-01-17T00:17:23.993239399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:23.993436 containerd[1466]: time="2026-01-17T00:17:23.992640096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:24.041722 systemd[1]: Started cri-containerd-46f75bc7d4c9d253bd9ee3ed92b5d4b580756aef0647ef6f1e47741daea4f329.scope - libcontainer container 46f75bc7d4c9d253bd9ee3ed92b5d4b580756aef0647ef6f1e47741daea4f329. Jan 17 00:17:24.055999 systemd[1]: Started cri-containerd-1b1c71cf8450f164cea52e6cd2b8d49335e81b1e54ba947474528cb749ae4981.scope - libcontainer container 1b1c71cf8450f164cea52e6cd2b8d49335e81b1e54ba947474528cb749ae4981. Jan 17 00:17:24.071434 systemd[1]: Started cri-containerd-57ae299fc71d0390c33816ae537a769ce847888e513ff10c42b217ad91e4769c.scope - libcontainer container 57ae299fc71d0390c33816ae537a769ce847888e513ff10c42b217ad91e4769c. Jan 17 00:17:24.159273 kubelet[2229]: E0117 00:17:24.159072 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="1.6s" Jan 17 00:17:24.202018 containerd[1466]: time="2026-01-17T00:17:24.201938651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,Uid:c63f6aadae8432c94c273e8f3dcc61ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b1c71cf8450f164cea52e6cd2b8d49335e81b1e54ba947474528cb749ae4981\"" Jan 17 00:17:24.208973 containerd[1466]: time="2026-01-17T00:17:24.207985823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,Uid:ce5f651fe769927d9adaa239e2ce7aad,Namespace:kube-system,Attempt:0,} returns sandbox id \"57ae299fc71d0390c33816ae537a769ce847888e513ff10c42b217ad91e4769c\"" Jan 17 00:17:24.210908 kubelet[2229]: E0117 00:17:24.210791 2229 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e4" Jan 17 00:17:24.223524 kubelet[2229]: E0117 00:17:24.222064 2229 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fe" Jan 17 00:17:24.223723 containerd[1466]: time="2026-01-17T00:17:24.222987877Z" level=info msg="CreateContainer within sandbox \"1b1c71cf8450f164cea52e6cd2b8d49335e81b1e54ba947474528cb749ae4981\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:17:24.224957 containerd[1466]: time="2026-01-17T00:17:24.224378578Z" level=info msg="CreateContainer within sandbox \"57ae299fc71d0390c33816ae537a769ce847888e513ff10c42b217ad91e4769c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:17:24.242686 containerd[1466]: time="2026-01-17T00:17:24.242550900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,Uid:df084941aa0ec5356fb565f72af4050f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"46f75bc7d4c9d253bd9ee3ed92b5d4b580756aef0647ef6f1e47741daea4f329\"" Jan 17 00:17:24.248036 kubelet[2229]: E0117 00:17:24.247951 2229 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e4" Jan 17 00:17:24.253589 containerd[1466]: time="2026-01-17T00:17:24.253395639Z" level=info msg="CreateContainer within sandbox \"46f75bc7d4c9d253bd9ee3ed92b5d4b580756aef0647ef6f1e47741daea4f329\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:17:24.255105 kubelet[2229]: W0117 00:17:24.254795 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Jan 17 00:17:24.255105 kubelet[2229]: E0117 00:17:24.254924 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:17:24.285388 containerd[1466]: time="2026-01-17T00:17:24.284996055Z" level=info msg="CreateContainer within sandbox \"57ae299fc71d0390c33816ae537a769ce847888e513ff10c42b217ad91e4769c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b17b8d5b43fdd0946d8cf4cd5f5271b3243089ac86b40762d86af6c30e8f8716\"" Jan 17 00:17:24.287224 containerd[1466]: time="2026-01-17T00:17:24.286475521Z" level=info msg="StartContainer for \"b17b8d5b43fdd0946d8cf4cd5f5271b3243089ac86b40762d86af6c30e8f8716\"" Jan 17 00:17:24.292266 containerd[1466]: time="2026-01-17T00:17:24.292142582Z" level=info msg="CreateContainer within sandbox \"1b1c71cf8450f164cea52e6cd2b8d49335e81b1e54ba947474528cb749ae4981\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd801d6462d9a289f170c399421d9a6983e3e641ee33a7ebb0b36b5bcfc935b9\"" Jan 17 00:17:24.294661 containerd[1466]: time="2026-01-17T00:17:24.294545481Z" level=info msg="StartContainer for \"dd801d6462d9a289f170c399421d9a6983e3e641ee33a7ebb0b36b5bcfc935b9\"" Jan 17 00:17:24.298884 containerd[1466]: time="2026-01-17T00:17:24.298693828Z" level=info msg="CreateContainer within sandbox \"46f75bc7d4c9d253bd9ee3ed92b5d4b580756aef0647ef6f1e47741daea4f329\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9fc884e67314e9eb4d8ef937402f805307d487ec1986e0f34be11a56480b9b83\"" Jan 17 00:17:24.300022 containerd[1466]: time="2026-01-17T00:17:24.299957872Z" level=info msg="StartContainer for \"9fc884e67314e9eb4d8ef937402f805307d487ec1986e0f34be11a56480b9b83\"" Jan 17 00:17:24.328124 kubelet[2229]: W0117 00:17:24.327947 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Jan 17 00:17:24.328124 kubelet[2229]: E0117 00:17:24.328085 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.128.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:17:24.369586 systemd[1]: Started cri-containerd-dd801d6462d9a289f170c399421d9a6983e3e641ee33a7ebb0b36b5bcfc935b9.scope - libcontainer container dd801d6462d9a289f170c399421d9a6983e3e641ee33a7ebb0b36b5bcfc935b9. Jan 17 00:17:24.384741 kubelet[2229]: I0117 00:17:24.384644 2229 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:24.385303 kubelet[2229]: E0117 00:17:24.385234 2229 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:24.388421 systemd[1]: Started cri-containerd-9fc884e67314e9eb4d8ef937402f805307d487ec1986e0f34be11a56480b9b83.scope - libcontainer container 9fc884e67314e9eb4d8ef937402f805307d487ec1986e0f34be11a56480b9b83. Jan 17 00:17:24.397318 systemd[1]: Started cri-containerd-b17b8d5b43fdd0946d8cf4cd5f5271b3243089ac86b40762d86af6c30e8f8716.scope - libcontainer container b17b8d5b43fdd0946d8cf4cd5f5271b3243089ac86b40762d86af6c30e8f8716. Jan 17 00:17:24.528689 containerd[1466]: time="2026-01-17T00:17:24.527941717Z" level=info msg="StartContainer for \"b17b8d5b43fdd0946d8cf4cd5f5271b3243089ac86b40762d86af6c30e8f8716\" returns successfully" Jan 17 00:17:24.541358 containerd[1466]: time="2026-01-17T00:17:24.541284320Z" level=info msg="StartContainer for \"dd801d6462d9a289f170c399421d9a6983e3e641ee33a7ebb0b36b5bcfc935b9\" returns successfully" Jan 17 00:17:24.603562 containerd[1466]: time="2026-01-17T00:17:24.602686490Z" level=info msg="StartContainer for \"9fc884e67314e9eb4d8ef937402f805307d487ec1986e0f34be11a56480b9b83\" returns successfully" Jan 17 00:17:24.836547 kubelet[2229]: E0117 00:17:24.836313 2229 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:24.848169 kubelet[2229]: E0117 00:17:24.847682 2229 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:24.851951 kubelet[2229]: E0117 00:17:24.851758 2229 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:25.855263 kubelet[2229]: E0117 00:17:25.855200 2229 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:25.856753 kubelet[2229]: E0117 00:17:25.856702 2229 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:25.995787 kubelet[2229]: I0117 00:17:25.995677 2229 
kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:29.516680 update_engine[1442]: I20260117 00:17:29.516546 1442 update_attempter.cc:509] Updating boot flags... Jan 17 00:17:29.675890 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2516) Jan 17 00:17:29.946789 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2520) Jan 17 00:17:30.189645 kubelet[2229]: E0117 00:17:30.189580 2229 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:30.199524 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2520) Jan 17 00:17:30.304994 kubelet[2229]: E0117 00:17:30.303999 2229 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694.188b5c8dca5d4f3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,UID:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,},FirstTimestamp:2026-01-17 00:17:22.728308541 +0000 UTC m=+0.723574600,LastTimestamp:2026-01-17 00:17:22.728308541 +0000 UTC m=+0.723574600,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,}" Jan 17 00:17:30.333825 kubelet[2229]: I0117 00:17:30.333757 2229 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:30.355244 kubelet[2229]: I0117 00:17:30.353229 2229 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:30.376803 kubelet[2229]: E0117 00:17:30.375984 2229 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694.188b5c8dccb60fdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,UID:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,},FirstTimestamp:2026-01-17 00:17:22.767679455 +0000 UTC m=+0.762945497,LastTimestamp:2026-01-17 00:17:22.767679455 +0000 UTC m=+0.762945497,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694,}" Jan 17 00:17:30.399487 kubelet[2229]: E0117 00:17:30.398056 2229 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:30.399487 kubelet[2229]: I0117 00:17:30.398121 2229 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:30.405215 kubelet[2229]: E0117 00:17:30.405158 2229 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:30.405776 kubelet[2229]: I0117 00:17:30.405481 2229 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:30.412955 kubelet[2229]: E0117 00:17:30.412879 2229 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:30.644537 kubelet[2229]: I0117 00:17:30.644266 2229 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:30.652163 kubelet[2229]: E0117 00:17:30.652093 2229 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:30.711490 kubelet[2229]: I0117 00:17:30.709571 2229 apiserver.go:52] "Watching apiserver" Jan 17 00:17:30.749683 kubelet[2229]: I0117 00:17:30.749614 2229 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:17:31.412397 kubelet[2229]: I0117 00:17:31.412321 2229 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:31.428517 kubelet[2229]: W0117 00:17:31.427706 2229 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 17 00:17:32.498925 systemd[1]: Reloading requested from client PID 2529 ('systemctl') (unit session-9.scope)... Jan 17 00:17:32.498952 systemd[1]: Reloading... Jan 17 00:17:32.677687 zram_generator::config[2569]: No configuration found. Jan 17 00:17:32.914256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:17:33.057276 systemd[1]: Reloading finished in 557 ms. Jan 17 00:17:33.122913 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:17:33.146688 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:17:33.147108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:17:33.147217 systemd[1]: kubelet.service: Consumed 1.510s CPU time, 133.1M memory peak, 0B memory swap peak. Jan 17 00:17:33.154157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:17:33.550950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:17:33.567747 (kubelet)[2619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:17:33.687267 kubelet[2619]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:17:33.687267 kubelet[2619]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:17:33.687267 kubelet[2619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:17:33.688030 kubelet[2619]: I0117 00:17:33.687497 2619 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:17:33.721768 kubelet[2619]: I0117 00:17:33.719161 2619 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:17:33.721768 kubelet[2619]: I0117 00:17:33.719219 2619 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:17:33.721768 kubelet[2619]: I0117 00:17:33.720309 2619 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:17:33.739765 kubelet[2619]: I0117 00:17:33.739683 2619 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:17:33.750158 kubelet[2619]: I0117 00:17:33.750096 2619 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:17:33.759356 kubelet[2619]: E0117 00:17:33.759300 2619 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:17:33.759870 kubelet[2619]: I0117 00:17:33.759847 2619 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:17:33.766584 kubelet[2619]: I0117 00:17:33.766443 2619 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:17:33.768294 kubelet[2619]: I0117 00:17:33.767009 2619 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:17:33.768294 kubelet[2619]: I0117 00:17:33.767098 2619 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:17:33.768294 kubelet[2619]: I0117 00:17:33.767735 2619 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:17:33.768294 kubelet[2619]: I0117 00:17:33.767782 2619 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:17:33.768961 kubelet[2619]: I0117 00:17:33.767977 2619 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:17:33.768961 kubelet[2619]: I0117 00:17:33.768298 2619 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:17:33.770011 kubelet[2619]: I0117 00:17:33.769399 2619 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:17:33.770011 kubelet[2619]: I0117 00:17:33.769522 2619 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:17:33.770011 kubelet[2619]: I0117 00:17:33.769611 2619 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:17:33.779764 kubelet[2619]: I0117 00:17:33.779524 2619 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:17:33.783292 kubelet[2619]: I0117 00:17:33.783243 2619 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:17:33.785489 kubelet[2619]: I0117 00:17:33.785371 2619 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:17:33.785489 kubelet[2619]: I0117 00:17:33.785477 2619 server.go:1287] "Started kubelet" Jan 17 00:17:33.798492 kubelet[2619]: I0117 00:17:33.795172 2619 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:17:33.812088 kubelet[2619]: I0117 
00:17:33.811417 2619 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:17:33.823568 kubelet[2619]: I0117 00:17:33.823380 2619 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:17:33.824704 kubelet[2619]: I0117 00:17:33.824664 2619 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:17:33.829509 kubelet[2619]: I0117 00:17:33.829251 2619 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:17:33.840428 kubelet[2619]: I0117 00:17:33.839373 2619 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:17:33.843425 kubelet[2619]: E0117 00:17:33.841930 2619 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" not found" Jan 17 00:17:33.851965 kubelet[2619]: I0117 00:17:33.851923 2619 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:17:33.859499 kubelet[2619]: I0117 00:17:33.852558 2619 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:17:33.859499 kubelet[2619]: I0117 00:17:33.858731 2619 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:17:33.871122 kubelet[2619]: I0117 00:17:33.871060 2619 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:17:33.872980 kubelet[2619]: I0117 00:17:33.871274 2619 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:17:33.885897 kubelet[2619]: I0117 00:17:33.885732 2619 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:17:33.895281 kubelet[2619]: I0117 00:17:33.894822 2619 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:17:33.905329 kubelet[2619]: E0117 00:17:33.905266 2619 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:17:33.908756 kubelet[2619]: I0117 00:17:33.908702 2619 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:17:33.908756 kubelet[2619]: I0117 00:17:33.908770 2619 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:17:33.909053 kubelet[2619]: I0117 00:17:33.908803 2619 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
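[Editor's note] The podresources entry above advertises qps=100 with burstTokens=10, the classic token-bucket pairing: sustained refill rate plus a bounded burst. A minimal sketch of those semantics using golang.org/x/time/rate — an illustration of the policy, not the kubelet's own limiter implementation:

    // Hedged sketch: qps/burst as a token bucket, mirroring the
    // "Setting rate limiting for endpoint" entry above.
    package main

    import (
    	"fmt"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// Refill at 100 requests/second, allow bursts of up to 10.
    	limiter := rate.NewLimiter(rate.Limit(100), 10)

    	allowed := 0
    	for i := 0; i < 20; i++ { // 20 back-to-back calls
    		if limiter.Allow() {
    			allowed++
    		}
    	}
    	// Roughly the 10 burst tokens plus whatever refilled during the loop.
    	fmt.Println("allowed without waiting:", allowed)
    }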
Jan 17 00:17:33.909053 kubelet[2619]: I0117 00:17:33.908815 2619 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:17:33.909053 kubelet[2619]: E0117 00:17:33.908903 2619 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:17:34.009769 kubelet[2619]: E0117 00:17:34.009624 2619 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:17:34.016265 kubelet[2619]: I0117 00:17:34.016221 2619 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:17:34.016586 kubelet[2619]: I0117 00:17:34.016251 2619 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:17:34.016586 kubelet[2619]: I0117 00:17:34.016341 2619 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:17:34.017993 kubelet[2619]: I0117 00:17:34.017914 2619 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:17:34.017993 kubelet[2619]: I0117 00:17:34.017947 2619 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:17:34.017993 kubelet[2619]: I0117 00:17:34.017979 2619 policy_none.go:49] "None policy: Start" Jan 17 00:17:34.017993 kubelet[2619]: I0117 00:17:34.017999 2619 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:17:34.018303 kubelet[2619]: I0117 00:17:34.018021 2619 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:17:34.018303 kubelet[2619]: I0117 00:17:34.018196 2619 state_mem.go:75] "Updated machine memory state" Jan 17 00:17:34.027722 kubelet[2619]: I0117 00:17:34.027149 2619 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:17:34.027722 kubelet[2619]: I0117 00:17:34.027511 2619 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:17:34.027722 kubelet[2619]: I0117 00:17:34.027533 2619 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:17:34.028712 kubelet[2619]: I0117 00:17:34.028694 2619 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:17:34.036250 kubelet[2619]: E0117 00:17:34.036203 2619 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:17:34.168314 kubelet[2619]: I0117 00:17:34.167108 2619 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.184268 kubelet[2619]: I0117 00:17:34.184189 2619 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.184898 kubelet[2619]: I0117 00:17:34.184399 2619 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.212030 kubelet[2619]: I0117 00:17:34.211560 2619 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.212268 kubelet[2619]: I0117 00:17:34.212233 2619 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.215666 kubelet[2619]: I0117 00:17:34.213399 2619 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.225264 kubelet[2619]: W0117 00:17:34.224654 2619 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 17 00:17:34.227681 kubelet[2619]: W0117 00:17:34.227331 2619 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 17 00:17:34.232498 kubelet[2619]: W0117 00:17:34.232311 2619 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 17 00:17:34.232498 kubelet[2619]: E0117 00:17:34.232418 2619 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.263211 kubelet[2619]: I0117 00:17:34.262788 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce5f651fe769927d9adaa239e2ce7aad-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"ce5f651fe769927d9adaa239e2ce7aad\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.263211 kubelet[2619]: I0117 00:17:34.262882 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df084941aa0ec5356fb565f72af4050f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"df084941aa0ec5356fb565f72af4050f\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.263211 kubelet[2619]: I0117 00:17:34.262922 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce5f651fe769927d9adaa239e2ce7aad-ca-certs\") pod 
\"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"ce5f651fe769927d9adaa239e2ce7aad\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.263211 kubelet[2619]: I0117 00:17:34.262956 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce5f651fe769927d9adaa239e2ce7aad-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"ce5f651fe769927d9adaa239e2ce7aad\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.263639 kubelet[2619]: I0117 00:17:34.262993 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce5f651fe769927d9adaa239e2ce7aad-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"ce5f651fe769927d9adaa239e2ce7aad\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.263639 kubelet[2619]: I0117 00:17:34.263029 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce5f651fe769927d9adaa239e2ce7aad-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"ce5f651fe769927d9adaa239e2ce7aad\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.263639 kubelet[2619]: I0117 00:17:34.263061 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c63f6aadae8432c94c273e8f3dcc61ed-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"c63f6aadae8432c94c273e8f3dcc61ed\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.263639 kubelet[2619]: I0117 00:17:34.263095 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c63f6aadae8432c94c273e8f3dcc61ed-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"c63f6aadae8432c94c273e8f3dcc61ed\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.263908 kubelet[2619]: I0117 00:17:34.263132 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c63f6aadae8432c94c273e8f3dcc61ed-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" (UID: \"c63f6aadae8432c94c273e8f3dcc61ed\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.772191 kubelet[2619]: I0117 00:17:34.772121 2619 apiserver.go:52] "Watching apiserver" Jan 17 00:17:34.858316 kubelet[2619]: I0117 00:17:34.858197 2619 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:17:34.964090 kubelet[2619]: I0117 00:17:34.963690 2619 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:34.978914 kubelet[2619]: W0117 00:17:34.977636 2619 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 17 00:17:34.979476 kubelet[2619]: E0117 00:17:34.979206 2619 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:17:35.017502 kubelet[2619]: I0117 00:17:35.017370 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" podStartSLOduration=1.017346514 podStartE2EDuration="1.017346514s" podCreationTimestamp="2026-01-17 00:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:17:35.016897807 +0000 UTC m=+1.438379376" watchObservedRunningTime="2026-01-17 00:17:35.017346514 +0000 UTC m=+1.438828073" Jan 17 00:17:35.017832 kubelet[2619]: I0117 00:17:35.017648 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" podStartSLOduration=1.017616204 podStartE2EDuration="1.017616204s" podCreationTimestamp="2026-01-17 00:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:17:34.995507613 +0000 UTC m=+1.416989189" watchObservedRunningTime="2026-01-17 00:17:35.017616204 +0000 UTC m=+1.439097774" Jan 17 00:17:37.986139 kubelet[2619]: I0117 00:17:37.985899 2619 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:17:37.988357 kubelet[2619]: I0117 00:17:37.987168 2619 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:17:37.991256 containerd[1466]: time="2026-01-17T00:17:37.986826952Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 17 00:17:38.798651 kubelet[2619]: I0117 00:17:38.798586 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a3bdd06-17d9-49f1-a0fa-1e1bf73f94a9-xtables-lock\") pod \"kube-proxy-l472h\" (UID: \"2a3bdd06-17d9-49f1-a0fa-1e1bf73f94a9\") " pod="kube-system/kube-proxy-l472h" Jan 17 00:17:38.798878 kubelet[2619]: I0117 00:17:38.798671 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skqbv\" (UniqueName: \"kubernetes.io/projected/2a3bdd06-17d9-49f1-a0fa-1e1bf73f94a9-kube-api-access-skqbv\") pod \"kube-proxy-l472h\" (UID: \"2a3bdd06-17d9-49f1-a0fa-1e1bf73f94a9\") " pod="kube-system/kube-proxy-l472h" Jan 17 00:17:38.798878 kubelet[2619]: I0117 00:17:38.798710 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a3bdd06-17d9-49f1-a0fa-1e1bf73f94a9-kube-proxy\") pod \"kube-proxy-l472h\" (UID: \"2a3bdd06-17d9-49f1-a0fa-1e1bf73f94a9\") " pod="kube-system/kube-proxy-l472h" Jan 17 00:17:38.798878 kubelet[2619]: I0117 00:17:38.798734 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a3bdd06-17d9-49f1-a0fa-1e1bf73f94a9-lib-modules\") pod \"kube-proxy-l472h\" (UID: \"2a3bdd06-17d9-49f1-a0fa-1e1bf73f94a9\") " pod="kube-system/kube-proxy-l472h" Jan 17 00:17:38.801545 systemd[1]: Created slice kubepods-besteffort-pod2a3bdd06_17d9_49f1_a0fa_1e1bf73f94a9.slice - libcontainer container kubepods-besteffort-pod2a3bdd06_17d9_49f1_a0fa_1e1bf73f94a9.slice. Jan 17 00:17:39.115953 systemd[1]: Created slice kubepods-besteffort-pod8d57a645_32cc_4f47_bb66_d0351817397f.slice - libcontainer container kubepods-besteffort-pod8d57a645_32cc_4f47_bb66_d0351817397f.slice. Jan 17 00:17:39.121795 containerd[1466]: time="2026-01-17T00:17:39.121728944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l472h,Uid:2a3bdd06-17d9-49f1-a0fa-1e1bf73f94a9,Namespace:kube-system,Attempt:0,}" Jan 17 00:17:39.177581 containerd[1466]: time="2026-01-17T00:17:39.173376861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:17:39.177581 containerd[1466]: time="2026-01-17T00:17:39.173491571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:17:39.177581 containerd[1466]: time="2026-01-17T00:17:39.173511541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:39.177581 containerd[1466]: time="2026-01-17T00:17:39.173666742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:39.201882 kubelet[2619]: I0117 00:17:39.201807 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94w9\" (UniqueName: \"kubernetes.io/projected/8d57a645-32cc-4f47-bb66-d0351817397f-kube-api-access-m94w9\") pod \"tigera-operator-7dcd859c48-fk5fx\" (UID: \"8d57a645-32cc-4f47-bb66-d0351817397f\") " pod="tigera-operator/tigera-operator-7dcd859c48-fk5fx" Jan 17 00:17:39.201882 kubelet[2619]: I0117 00:17:39.201889 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8d57a645-32cc-4f47-bb66-d0351817397f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-fk5fx\" (UID: \"8d57a645-32cc-4f47-bb66-d0351817397f\") " pod="tigera-operator/tigera-operator-7dcd859c48-fk5fx" Jan 17 00:17:39.211236 systemd[1]: run-containerd-runc-k8s.io-1fea3fba3b3057892fdf877532ef8abf82937cdee7c71f970a2b1fefdaa6e315-runc.Jc7CKp.mount: Deactivated successfully. Jan 17 00:17:39.226890 systemd[1]: Started cri-containerd-1fea3fba3b3057892fdf877532ef8abf82937cdee7c71f970a2b1fefdaa6e315.scope - libcontainer container 1fea3fba3b3057892fdf877532ef8abf82937cdee7c71f970a2b1fefdaa6e315. Jan 17 00:17:39.272966 containerd[1466]: time="2026-01-17T00:17:39.272889781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l472h,Uid:2a3bdd06-17d9-49f1-a0fa-1e1bf73f94a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fea3fba3b3057892fdf877532ef8abf82937cdee7c71f970a2b1fefdaa6e315\"" Jan 17 00:17:39.281523 containerd[1466]: time="2026-01-17T00:17:39.280809481Z" level=info msg="CreateContainer within sandbox \"1fea3fba3b3057892fdf877532ef8abf82937cdee7c71f970a2b1fefdaa6e315\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:17:39.321501 containerd[1466]: time="2026-01-17T00:17:39.318264826Z" level=info msg="CreateContainer within sandbox \"1fea3fba3b3057892fdf877532ef8abf82937cdee7c71f970a2b1fefdaa6e315\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e6e0391a153a2ac045943d08df5e66ba5464581d7aca1be3a54e6c860dc17387\"" Jan 17 00:17:39.321501 containerd[1466]: time="2026-01-17T00:17:39.319951729Z" level=info msg="StartContainer for \"e6e0391a153a2ac045943d08df5e66ba5464581d7aca1be3a54e6c860dc17387\"" Jan 17 00:17:39.365866 systemd[1]: Started cri-containerd-e6e0391a153a2ac045943d08df5e66ba5464581d7aca1be3a54e6c860dc17387.scope - libcontainer container e6e0391a153a2ac045943d08df5e66ba5464581d7aca1be3a54e6c860dc17387. Jan 17 00:17:39.419489 containerd[1466]: time="2026-01-17T00:17:39.419395490Z" level=info msg="StartContainer for \"e6e0391a153a2ac045943d08df5e66ba5464581d7aca1be3a54e6c860dc17387\" returns successfully" Jan 17 00:17:39.427880 containerd[1466]: time="2026-01-17T00:17:39.427773485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fk5fx,Uid:8d57a645-32cc-4f47-bb66-d0351817397f,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:17:39.489087 containerd[1466]: time="2026-01-17T00:17:39.487941660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:17:39.489087 containerd[1466]: time="2026-01-17T00:17:39.488044626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:17:39.489087 containerd[1466]: time="2026-01-17T00:17:39.488077252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:39.489087 containerd[1466]: time="2026-01-17T00:17:39.488243118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:39.539040 systemd[1]: Started cri-containerd-6dd0c054aad7851bf95a5a903345095f4d2bb7ecdfa51315444cd95032d73c65.scope - libcontainer container 6dd0c054aad7851bf95a5a903345095f4d2bb7ecdfa51315444cd95032d73c65. Jan 17 00:17:39.644628 containerd[1466]: time="2026-01-17T00:17:39.644546071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fk5fx,Uid:8d57a645-32cc-4f47-bb66-d0351817397f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6dd0c054aad7851bf95a5a903345095f4d2bb7ecdfa51315444cd95032d73c65\"" Jan 17 00:17:39.652807 containerd[1466]: time="2026-01-17T00:17:39.652472727Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:17:40.018351 kubelet[2619]: I0117 00:17:40.018066 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l472h" podStartSLOduration=2.01803924 podStartE2EDuration="2.01803924s" podCreationTimestamp="2026-01-17 00:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:17:40.017682061 +0000 UTC m=+6.439163634" watchObservedRunningTime="2026-01-17 00:17:40.01803924 +0000 UTC m=+6.439520809" Jan 17 00:17:41.152642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208765022.mount: Deactivated successfully. 
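[Editor's note] The "Created slice kubepods-besteffort-pod….slice" entries above show the systemd cgroup naming the kubelet uses for pods: the pod UID's dashes are escaped to underscores and the name is nested under the QoS-class parent slice. A small sketch of that mapping, checked against the kube-proxy pod above (podSliceName is an illustrative helper, not kubelet code):

    // Hedged sketch of the systemd slice naming visible in the log.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func podSliceName(qosClass, podUID string) string {
    	// systemd slice names cannot contain '-' inside a segment label,
    	// so the UID's dashes are escaped to underscores.
    	escaped := strings.ReplaceAll(podUID, "-", "_")
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
    	fmt.Println(podSliceName("besteffort", "2a3bdd06-17d9-49f1-a0fa-1e1bf73f94a9"))
    	// kubepods-besteffort-pod2a3bdd06_17d9_49f1_a0fa_1e1bf73f94a9.slice
    	// -- matches the "Created slice" entry for kube-proxy-l472h above.
    }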
Jan 17 00:17:42.566717 containerd[1466]: time="2026-01-17T00:17:42.566614525Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:42.568370 containerd[1466]: time="2026-01-17T00:17:42.568265627Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:17:42.570290 containerd[1466]: time="2026-01-17T00:17:42.570178648Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:42.574031 containerd[1466]: time="2026-01-17T00:17:42.573930750Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:17:42.576119 containerd[1466]: time="2026-01-17T00:17:42.575254858Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.92257575s" Jan 17 00:17:42.576119 containerd[1466]: time="2026-01-17T00:17:42.575326626Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:17:42.580041 containerd[1466]: time="2026-01-17T00:17:42.579968388Z" level=info msg="CreateContainer within sandbox \"6dd0c054aad7851bf95a5a903345095f4d2bb7ecdfa51315444cd95032d73c65\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:17:42.611946 containerd[1466]: time="2026-01-17T00:17:42.611878112Z" level=info msg="CreateContainer within sandbox \"6dd0c054aad7851bf95a5a903345095f4d2bb7ecdfa51315444cd95032d73c65\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0bbdcc70e622b216e5dbc57a84553cf27e1ad19d95d1bb33d61db180919129d1\"" Jan 17 00:17:42.613164 containerd[1466]: time="2026-01-17T00:17:42.613113372Z" level=info msg="StartContainer for \"0bbdcc70e622b216e5dbc57a84553cf27e1ad19d95d1bb33d61db180919129d1\"" Jan 17 00:17:42.680894 systemd[1]: Started cri-containerd-0bbdcc70e622b216e5dbc57a84553cf27e1ad19d95d1bb33d61db180919129d1.scope - libcontainer container 0bbdcc70e622b216e5dbc57a84553cf27e1ad19d95d1bb33d61db180919129d1. 
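[Editor's note] The tigera/operator pull above took roughly 2.928s, and the pod_startup_latency_tracker entry that follows reports podStartSLOduration=2.749s against podStartE2EDuration=5.677s: the SLO figure is the end-to-end start latency minus the image-pull window, so registry time does not count against the startup SLO (pods whose images were already present, like kube-proxy earlier, show SLO equal to E2E). The arithmetic, reproduced from the timestamps in the log:

    // Hedged sketch: podStartSLOduration = end-to-end latency minus pull window.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
    	parse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}

    	// Timestamps taken from the tigera-operator latency entry below.
    	created := parse("2026-01-17 00:17:39 +0000 UTC")
    	running := parse("2026-01-17 00:17:44.677025832 +0000 UTC")
    	pullStart := parse("2026-01-17 00:17:39.648977229 +0000 UTC")
    	pullEnd := parse("2026-01-17 00:17:42.576966188 +0000 UTC")

    	e2e := running.Sub(created)
    	slo := e2e - pullEnd.Sub(pullStart)
    	fmt.Println("podStartE2EDuration ~", e2e) // ~5.677025832s
    	fmt.Println("podStartSLOduration ~", slo) // ~2.749s, as reported below
    }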
Jan 17 00:17:42.730980 containerd[1466]: time="2026-01-17T00:17:42.730867593Z" level=info msg="StartContainer for \"0bbdcc70e622b216e5dbc57a84553cf27e1ad19d95d1bb33d61db180919129d1\" returns successfully" Jan 17 00:17:44.678053 kubelet[2619]: I0117 00:17:44.677057 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-fk5fx" podStartSLOduration=2.749036889 podStartE2EDuration="5.677025832s" podCreationTimestamp="2026-01-17 00:17:39 +0000 UTC" firstStartedPulling="2026-01-17 00:17:39.648977229 +0000 UTC m=+6.070458794" lastFinishedPulling="2026-01-17 00:17:42.576966188 +0000 UTC m=+8.998447737" observedRunningTime="2026-01-17 00:17:43.018845592 +0000 UTC m=+9.440327162" watchObservedRunningTime="2026-01-17 00:17:44.677025832 +0000 UTC m=+11.098507401" Jan 17 00:17:48.952302 sudo[1736]: pam_unix(sudo:session): session closed for user root Jan 17 00:17:48.985277 sshd[1733]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:48.992343 systemd[1]: sshd@8-10.128.0.35:22-4.153.228.146:45086.service: Deactivated successfully. Jan 17 00:17:48.997620 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:17:48.997983 systemd[1]: session-9.scope: Consumed 7.544s CPU time, 159.0M memory peak, 0B memory swap peak. Jan 17 00:17:49.001051 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:17:49.003331 systemd-logind[1440]: Removed session 9. Jan 17 00:17:58.041874 systemd[1]: Created slice kubepods-besteffort-pod6ed70318_97d3_4a5b_b72a_44885a392973.slice - libcontainer container kubepods-besteffort-pod6ed70318_97d3_4a5b_b72a_44885a392973.slice. Jan 17 00:17:58.049417 kubelet[2619]: I0117 00:17:58.049158 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ed70318-97d3-4a5b-b72a-44885a392973-tigera-ca-bundle\") pod \"calico-typha-947778659-rk4qs\" (UID: \"6ed70318-97d3-4a5b-b72a-44885a392973\") " pod="calico-system/calico-typha-947778659-rk4qs" Jan 17 00:17:58.049417 kubelet[2619]: I0117 00:17:58.049363 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq5kw\" (UniqueName: \"kubernetes.io/projected/6ed70318-97d3-4a5b-b72a-44885a392973-kube-api-access-sq5kw\") pod \"calico-typha-947778659-rk4qs\" (UID: \"6ed70318-97d3-4a5b-b72a-44885a392973\") " pod="calico-system/calico-typha-947778659-rk4qs" Jan 17 00:17:58.052872 kubelet[2619]: I0117 00:17:58.051495 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6ed70318-97d3-4a5b-b72a-44885a392973-typha-certs\") pod \"calico-typha-947778659-rk4qs\" (UID: \"6ed70318-97d3-4a5b-b72a-44885a392973\") " pod="calico-system/calico-typha-947778659-rk4qs" Jan 17 00:17:58.243923 systemd[1]: Created slice kubepods-besteffort-podd98cb026_9dde_4bf8_b9c3_fe2b3104e22e.slice - libcontainer container kubepods-besteffort-podd98cb026_9dde_4bf8_b9c3_fe2b3104e22e.slice. 
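[Editor's note] The calico-node pod declared below mounts a flexvol-driver-host path; until its driver binary lands under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, every kubelet probe fails with the "unexpected end of JSON input" entries that follow: the FlexVolume driver-call contract expects a JSON status on stdout, and a missing binary produces no output at all. A sketch of that contract — DriverStatus mirrors the documented FlexVolume response shape, not kubelet source:

    // Hedged sketch: why an absent FlexVolume binary yields
    // "unexpected end of JSON input" in the probe entries below.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // DriverStatus follows the documented FlexVolume response shape.
    type DriverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func parseInitOutput(out []byte) (*DriverStatus, error) {
    	var st DriverStatus
    	if err := json.Unmarshal(out, &st); err != nil {
    		return nil, err // empty stdout -> "unexpected end of JSON input"
    	}
    	return &st, nil
    }

    func main() {
    	// Missing binary: the kubelet gets no output back.
    	if _, err := parseInitOutput(nil); err != nil {
    		fmt.Println("driver call failed:", err)
    	}

    	// A healthy driver answers "init" with a JSON status like this.
    	ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
    	st, _ := parseInitOutput(ok)
    	fmt.Printf("healthy driver: %+v\n", *st)
    }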
Jan 17 00:17:58.254963 kubelet[2619]: I0117 00:17:58.254878 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-xtables-lock\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.254963 kubelet[2619]: I0117 00:17:58.254963 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8qfl\" (UniqueName: \"kubernetes.io/projected/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-kube-api-access-g8qfl\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.254963 kubelet[2619]: I0117 00:17:58.255007 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-tigera-ca-bundle\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.255416 kubelet[2619]: I0117 00:17:58.255036 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-node-certs\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.255416 kubelet[2619]: I0117 00:17:58.255068 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-policysync\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.255416 kubelet[2619]: I0117 00:17:58.255096 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-cni-log-dir\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.255416 kubelet[2619]: I0117 00:17:58.255121 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-cni-net-dir\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.255416 kubelet[2619]: I0117 00:17:58.255151 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-flexvol-driver-host\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.255689 kubelet[2619]: I0117 00:17:58.255177 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-cni-bin-dir\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.255689 kubelet[2619]: I0117 00:17:58.255201 2619 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-var-lib-calico\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.255689 kubelet[2619]: I0117 00:17:58.255230 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-lib-modules\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.255689 kubelet[2619]: I0117 00:17:58.255262 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d98cb026-9dde-4bf8-b9c3-fe2b3104e22e-var-run-calico\") pod \"calico-node-jl27b\" (UID: \"d98cb026-9dde-4bf8-b9c3-fe2b3104e22e\") " pod="calico-system/calico-node-jl27b" Jan 17 00:17:58.358133 containerd[1466]: time="2026-01-17T00:17:58.357864845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-947778659-rk4qs,Uid:6ed70318-97d3-4a5b-b72a-44885a392973,Namespace:calico-system,Attempt:0,}" Jan 17 00:17:58.380662 kubelet[2619]: E0117 00:17:58.379858 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:17:58.380662 kubelet[2619]: W0117 00:17:58.379904 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:17:58.380662 kubelet[2619]: E0117 00:17:58.379963 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:17:58.399730 kubelet[2619]: E0117 00:17:58.399575 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:17:58.399730 kubelet[2619]: W0117 00:17:58.399619 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:17:58.399730 kubelet[2619]: E0117 00:17:58.399660 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:17:58.477022 containerd[1466]: time="2026-01-17T00:17:58.475852920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:17:58.477022 containerd[1466]: time="2026-01-17T00:17:58.475960032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:17:58.477022 containerd[1466]: time="2026-01-17T00:17:58.475987125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:58.478159 containerd[1466]: time="2026-01-17T00:17:58.476795253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:17:58.517501 kubelet[2619]: E0117 00:17:58.515677 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625" Jan 17 00:17:58.536296 kubelet[2619]: E0117 00:17:58.535408 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:17:58.536296 kubelet[2619]: W0117 00:17:58.535486 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:17:58.536296 kubelet[2619]: E0117 00:17:58.535529 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:17:58.542131 kubelet[2619]: E0117 00:17:58.539750 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:17:58.542131 kubelet[2619]: W0117 00:17:58.539790 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:17:58.542131 kubelet[2619]: E0117 00:17:58.539821 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:17:58.542937 kubelet[2619]: E0117 00:17:58.542899 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:17:58.543698 kubelet[2619]: W0117 00:17:58.543205 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:17:58.543698 kubelet[2619]: E0117 00:17:58.543256 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:17:58.545721 kubelet[2619]: E0117 00:17:58.545684 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:17:58.546694 systemd[1]: Started cri-containerd-926991851456bade439bef4965a661b24c17cc0664ac37be6a278b1b12726a86.scope - libcontainer container 926991851456bade439bef4965a661b24c17cc0664ac37be6a278b1b12726a86. Jan 17 00:17:58.548134 kubelet[2619]: W0117 00:17:58.547233 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:17:58.548134 kubelet[2619]: E0117 00:17:58.547301 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 17 00:17:58.562367 containerd[1466]: time="2026-01-17T00:17:58.562303256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jl27b,Uid:d98cb026-9dde-4bf8-b9c3-fe2b3104e22e,Namespace:calico-system,Attempt:0,}"
Jan 17 00:17:58.569256 kubelet[2619]: I0117 00:17:58.569048 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05d5b250-556f-4421-995f-92aeade92625-kubelet-dir\") pod \"csi-node-driver-hxqn4\" (UID: \"05d5b250-556f-4421-995f-92aeade92625\") " pod="calico-system/csi-node-driver-hxqn4"
Jan 17 00:17:58.574642 kubelet[2619]: I0117 00:17:58.574222 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/05d5b250-556f-4421-995f-92aeade92625-socket-dir\") pod \"csi-node-driver-hxqn4\" (UID: \"05d5b250-556f-4421-995f-92aeade92625\") " pod="calico-system/csi-node-driver-hxqn4"
Jan 17 00:17:58.577724 kubelet[2619]: I0117 00:17:58.577152 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/05d5b250-556f-4421-995f-92aeade92625-registration-dir\") pod \"csi-node-driver-hxqn4\" (UID: \"05d5b250-556f-4421-995f-92aeade92625\") " pod="calico-system/csi-node-driver-hxqn4"
Jan 17 00:17:58.582494 kubelet[2619]: I0117 00:17:58.581778 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/05d5b250-556f-4421-995f-92aeade92625-varrun\") pod \"csi-node-driver-hxqn4\" (UID: \"05d5b250-556f-4421-995f-92aeade92625\") " pod="calico-system/csi-node-driver-hxqn4"
Jan 17 00:17:58.590185 kubelet[2619]: I0117 00:17:58.590159 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-splcj\" (UniqueName: \"kubernetes.io/projected/05d5b250-556f-4421-995f-92aeade92625-kube-api-access-splcj\") pod \"csi-node-driver-hxqn4\" (UID: \"05d5b250-556f-4421-995f-92aeade92625\") " pod="calico-system/csi-node-driver-hxqn4"
Jan 17 00:17:58.679823 containerd[1466]: time="2026-01-17T00:17:58.678167161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:17:58.679823 containerd[1466]: time="2026-01-17T00:17:58.678318125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:17:58.679823 containerd[1466]: time="2026-01-17T00:17:58.678342119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:17:58.679823 containerd[1466]: time="2026-01-17T00:17:58.678570139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:17:58.750886 systemd[1]: Started cri-containerd-1c8fe432d344c877c23e8fcad53815f23800af96a93b0f39dd2bffe39ecb43c0.scope - libcontainer container 1c8fe432d344c877c23e8fcad53815f23800af96a93b0f39dd2bffe39ecb43c0.
Jan 17 00:17:58.935417 containerd[1466]: time="2026-01-17T00:17:58.933727337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-947778659-rk4qs,Uid:6ed70318-97d3-4a5b-b72a-44885a392973,Namespace:calico-system,Attempt:0,} returns sandbox id \"926991851456bade439bef4965a661b24c17cc0664ac37be6a278b1b12726a86\""
Jan 17 00:17:58.957614 containerd[1466]: time="2026-01-17T00:17:58.955295813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 17 00:17:58.975298 containerd[1466]: time="2026-01-17T00:17:58.975189202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jl27b,Uid:d98cb026-9dde-4bf8-b9c3-fe2b3104e22e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c8fe432d344c877c23e8fcad53815f23800af96a93b0f39dd2bffe39ecb43c0\""
Jan 17 00:17:59.919124 kubelet[2619]: E0117 00:17:59.919054 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625"
Jan 17 00:17:59.944418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935108424.mount: Deactivated successfully.
Jan 17 00:18:01.313553 containerd[1466]: time="2026-01-17T00:18:01.313433536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:18:01.324104 containerd[1466]: time="2026-01-17T00:18:01.322844029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 17 00:18:01.324104 containerd[1466]: time="2026-01-17T00:18:01.323068709Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:18:01.330050 containerd[1466]: time="2026-01-17T00:18:01.329844015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:18:01.332051 containerd[1466]: time="2026-01-17T00:18:01.331972782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.376566244s"
Jan 17 00:18:01.332535 containerd[1466]: time="2026-01-17T00:18:01.332493605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 17 00:18:01.334765 containerd[1466]: time="2026-01-17T00:18:01.334596106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 00:18:01.367020 containerd[1466]: time="2026-01-17T00:18:01.366952877Z" level=info msg="CreateContainer within sandbox \"926991851456bade439bef4965a661b24c17cc0664ac37be6a278b1b12726a86\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
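The Pulled image line records both the payload size ("35234482" bytes) and the wall-clock pull time (2.376566244s). A quick Go check of the implied transfer rate, using only the numbers copied from that line:

package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesPulled = 35234482 // size reported for calico/typha:v3.30.4
	d, err := time.ParseDuration("2.376566244s") // duration from the Pulled image line
	if err != nil {
		panic(err)
	}
	// Prints roughly 14.1 MiB/s for this pull, using only what containerd
	// reported; registry latency and decompression are folded into the time.
	fmt.Printf("%.1f MiB/s\n", float64(bytesPulled)/d.Seconds()/(1<<20))
}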
\"926991851456bade439bef4965a661b24c17cc0664ac37be6a278b1b12726a86\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e65d6da3b5083d90bb7c9f6124eea31debc9d1bca7bde40509abfda7a1f9d4eb\"" Jan 17 00:18:01.409198 containerd[1466]: time="2026-01-17T00:18:01.409151364Z" level=info msg="StartContainer for \"e65d6da3b5083d90bb7c9f6124eea31debc9d1bca7bde40509abfda7a1f9d4eb\"" Jan 17 00:18:01.473794 systemd[1]: Started cri-containerd-e65d6da3b5083d90bb7c9f6124eea31debc9d1bca7bde40509abfda7a1f9d4eb.scope - libcontainer container e65d6da3b5083d90bb7c9f6124eea31debc9d1bca7bde40509abfda7a1f9d4eb. Jan 17 00:18:01.561315 containerd[1466]: time="2026-01-17T00:18:01.561231474Z" level=info msg="StartContainer for \"e65d6da3b5083d90bb7c9f6124eea31debc9d1bca7bde40509abfda7a1f9d4eb\" returns successfully" Jan 17 00:18:01.911329 kubelet[2619]: E0117 00:18:01.910833 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625" Jan 17 00:18:02.101468 kubelet[2619]: E0117 00:18:02.101049 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:18:02.101468 kubelet[2619]: W0117 00:18:02.101090 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:18:02.101468 kubelet[2619]: E0117 00:18:02.101125 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:18:02.102954 kubelet[2619]: E0117 00:18:02.102680 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:18:02.102954 kubelet[2619]: W0117 00:18:02.102713 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:18:02.102954 kubelet[2619]: E0117 00:18:02.102748 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:18:02.104713 kubelet[2619]: E0117 00:18:02.104508 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:18:02.104713 kubelet[2619]: W0117 00:18:02.104557 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:18:02.104713 kubelet[2619]: E0117 00:18:02.104592 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 17 00:18:02.107348 kubelet[2619]: E0117 00:18:02.107013 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:18:02.107348 kubelet[2619]: W0117 00:18:02.107064 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:18:02.107348 kubelet[2619]: E0117 00:18:02.107101 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:18:02.198817 kubelet[2619]: E0117 00:18:02.198754 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:18:02.198817 kubelet[2619]: W0117 00:18:02.198798 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:18:02.199084 kubelet[2619]: E0117 00:18:02.198834 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:18:02.511017 containerd[1466]: time="2026-01-17T00:18:02.508260017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:02.512809 containerd[1466]: time="2026-01-17T00:18:02.512716815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 17 00:18:02.516670 containerd[1466]: time="2026-01-17T00:18:02.516589144Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:02.526038 containerd[1466]: time="2026-01-17T00:18:02.525436572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:02.526827 containerd[1466]: time="2026-01-17T00:18:02.526766816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.191610198s" Jan 17 00:18:02.527096 containerd[1466]: time="2026-01-17T00:18:02.527048309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:18:02.534410 containerd[1466]: time="2026-01-17T00:18:02.534161246Z" level=info msg="CreateContainer within sandbox \"1c8fe432d344c877c23e8fcad53815f23800af96a93b0f39dd2bffe39ecb43c0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:18:02.568480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1553069796.mount: Deactivated successfully. Jan 17 00:18:02.569734 containerd[1466]: time="2026-01-17T00:18:02.569516797Z" level=info msg="CreateContainer within sandbox \"1c8fe432d344c877c23e8fcad53815f23800af96a93b0f39dd2bffe39ecb43c0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"75af45a23f87a58113c90c3d2d095de7364043d91aa1c46fa6dc7bc16a4b5559\"" Jan 17 00:18:02.574824 containerd[1466]: time="2026-01-17T00:18:02.572167074Z" level=info msg="StartContainer for \"75af45a23f87a58113c90c3d2d095de7364043d91aa1c46fa6dc7bc16a4b5559\"" Jan 17 00:18:02.653955 systemd[1]: Started cri-containerd-75af45a23f87a58113c90c3d2d095de7364043d91aa1c46fa6dc7bc16a4b5559.scope - libcontainer container 75af45a23f87a58113c90c3d2d095de7364043d91aa1c46fa6dc7bc16a4b5559. Jan 17 00:18:02.712745 containerd[1466]: time="2026-01-17T00:18:02.712661447Z" level=info msg="StartContainer for \"75af45a23f87a58113c90c3d2d095de7364043d91aa1c46fa6dc7bc16a4b5559\" returns successfully" Jan 17 00:18:02.744992 systemd[1]: cri-containerd-75af45a23f87a58113c90c3d2d095de7364043d91aa1c46fa6dc7bc16a4b5559.scope: Deactivated successfully. Jan 17 00:18:02.805072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75af45a23f87a58113c90c3d2d095de7364043d91aa1c46fa6dc7bc16a4b5559-rootfs.mount: Deactivated successfully. 
Jan 17 00:18:03.136994 kubelet[2619]: I0117 00:18:03.136761 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-947778659-rk4qs" podStartSLOduration=3.7562221239999998 podStartE2EDuration="6.136728595s" podCreationTimestamp="2026-01-17 00:17:57 +0000 UTC" firstStartedPulling="2026-01-17 00:17:58.953746049 +0000 UTC m=+25.375227603" lastFinishedPulling="2026-01-17 00:18:01.334252512 +0000 UTC m=+27.755734074" observedRunningTime="2026-01-17 00:18:02.279881866 +0000 UTC m=+28.701363435" watchObservedRunningTime="2026-01-17 00:18:03.136728595 +0000 UTC m=+29.558210221" Jan 17 00:18:03.478723 containerd[1466]: time="2026-01-17T00:18:03.478534708Z" level=info msg="shim disconnected" id=75af45a23f87a58113c90c3d2d095de7364043d91aa1c46fa6dc7bc16a4b5559 namespace=k8s.io Jan 17 00:18:03.478723 containerd[1466]: time="2026-01-17T00:18:03.478643106Z" level=warning msg="cleaning up after shim disconnected" id=75af45a23f87a58113c90c3d2d095de7364043d91aa1c46fa6dc7bc16a4b5559 namespace=k8s.io Jan 17 00:18:03.478723 containerd[1466]: time="2026-01-17T00:18:03.478659748Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:18:03.916535 kubelet[2619]: E0117 00:18:03.916317 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625" Jan 17 00:18:04.111949 containerd[1466]: time="2026-01-17T00:18:04.111872376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:18:05.915759 kubelet[2619]: E0117 00:18:05.915692 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625" Jan 17 00:18:07.836342 containerd[1466]: time="2026-01-17T00:18:07.836240399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:07.838442 containerd[1466]: time="2026-01-17T00:18:07.838100980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:18:07.841580 containerd[1466]: time="2026-01-17T00:18:07.840319484Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:07.845219 containerd[1466]: time="2026-01-17T00:18:07.845157888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:07.847229 containerd[1466]: time="2026-01-17T00:18:07.847154739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.735180867s" Jan 17 00:18:07.847680 containerd[1466]: time="2026-01-17T00:18:07.847632898Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:18:07.852054 containerd[1466]: time="2026-01-17T00:18:07.851950609Z" level=info msg="CreateContainer within sandbox \"1c8fe432d344c877c23e8fcad53815f23800af96a93b0f39dd2bffe39ecb43c0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:18:07.880130 containerd[1466]: time="2026-01-17T00:18:07.880049994Z" level=info msg="CreateContainer within sandbox \"1c8fe432d344c877c23e8fcad53815f23800af96a93b0f39dd2bffe39ecb43c0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f22683a21ef57f3c7ac8b151b2ca22197dedab111e93714e20bcfeed06aa1893\"" Jan 17 00:18:07.882223 containerd[1466]: time="2026-01-17T00:18:07.881332030Z" level=info msg="StartContainer for \"f22683a21ef57f3c7ac8b151b2ca22197dedab111e93714e20bcfeed06aa1893\"" Jan 17 00:18:07.924964 kubelet[2619]: E0117 00:18:07.921728 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625" Jan 17 00:18:07.946772 systemd[1]: Started cri-containerd-f22683a21ef57f3c7ac8b151b2ca22197dedab111e93714e20bcfeed06aa1893.scope - libcontainer container f22683a21ef57f3c7ac8b151b2ca22197dedab111e93714e20bcfeed06aa1893. Jan 17 00:18:08.013281 containerd[1466]: time="2026-01-17T00:18:08.013192423Z" level=info msg="StartContainer for \"f22683a21ef57f3c7ac8b151b2ca22197dedab111e93714e20bcfeed06aa1893\" returns successfully" Jan 17 00:18:09.363055 containerd[1466]: time="2026-01-17T00:18:09.362749754Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:18:09.369850 systemd[1]: cri-containerd-f22683a21ef57f3c7ac8b151b2ca22197dedab111e93714e20bcfeed06aa1893.scope: Deactivated successfully. Jan 17 00:18:09.379531 kubelet[2619]: I0117 00:18:09.377230 2619 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:18:09.440684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f22683a21ef57f3c7ac8b151b2ca22197dedab111e93714e20bcfeed06aa1893-rootfs.mount: Deactivated successfully. Jan 17 00:18:09.469156 systemd[1]: Created slice kubepods-burstable-pod7a458d0c_d067_4f59_ad18_82fe02f35050.slice - libcontainer container kubepods-burstable-pod7a458d0c_d067_4f59_ad18_82fe02f35050.slice. Jan 17 00:18:09.516221 systemd[1]: Created slice kubepods-burstable-pod0b123c30_c4a1_486c_a4a2_f586dab5927b.slice - libcontainer container kubepods-burstable-pod0b123c30_c4a1_486c_a4a2_f586dab5927b.slice. 
Jan 17 00:18:09.552557 kubelet[2619]: I0117 00:18:09.548994 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t5dx\" (UniqueName: \"kubernetes.io/projected/0b123c30-c4a1-486c-a4a2-f586dab5927b-kube-api-access-5t5dx\") pod \"coredns-668d6bf9bc-kt8n7\" (UID: \"0b123c30-c4a1-486c-a4a2-f586dab5927b\") " pod="kube-system/coredns-668d6bf9bc-kt8n7" Jan 17 00:18:09.552557 kubelet[2619]: I0117 00:18:09.549197 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8764k\" (UniqueName: \"kubernetes.io/projected/7a458d0c-d067-4f59-ad18-82fe02f35050-kube-api-access-8764k\") pod \"coredns-668d6bf9bc-wmqr2\" (UID: \"7a458d0c-d067-4f59-ad18-82fe02f35050\") " pod="kube-system/coredns-668d6bf9bc-wmqr2" Jan 17 00:18:09.552557 kubelet[2619]: I0117 00:18:09.549248 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a458d0c-d067-4f59-ad18-82fe02f35050-config-volume\") pod \"coredns-668d6bf9bc-wmqr2\" (UID: \"7a458d0c-d067-4f59-ad18-82fe02f35050\") " pod="kube-system/coredns-668d6bf9bc-wmqr2" Jan 17 00:18:09.552557 kubelet[2619]: I0117 00:18:09.549279 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b123c30-c4a1-486c-a4a2-f586dab5927b-config-volume\") pod \"coredns-668d6bf9bc-kt8n7\" (UID: \"0b123c30-c4a1-486c-a4a2-f586dab5927b\") " pod="kube-system/coredns-668d6bf9bc-kt8n7" Jan 17 00:18:09.553981 systemd[1]: Created slice kubepods-besteffort-podc361c6e8_c3e0_4b7a_8e22_dd558d5fdb2e.slice - libcontainer container kubepods-besteffort-podc361c6e8_c3e0_4b7a_8e22_dd558d5fdb2e.slice. Jan 17 00:18:09.583392 systemd[1]: Created slice kubepods-besteffort-podfe061b2a_805b_43bd_8451_203c834c880a.slice - libcontainer container kubepods-besteffort-podfe061b2a_805b_43bd_8451_203c834c880a.slice. Jan 17 00:18:09.611409 systemd[1]: Created slice kubepods-besteffort-pod0a073190_14cb_45b8_a9bf_4fd4665cfd04.slice - libcontainer container kubepods-besteffort-pod0a073190_14cb_45b8_a9bf_4fd4665cfd04.slice. Jan 17 00:18:09.625332 systemd[1]: Created slice kubepods-besteffort-pod3475ce39_a584_4708_980c_68f68b25eff1.slice - libcontainer container kubepods-besteffort-pod3475ce39_a584_4708_980c_68f68b25eff1.slice. 
Jan 17 00:18:09.674128 kubelet[2619]: I0117 00:18:09.651749 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3475ce39-a584-4708-980c-68f68b25eff1-calico-apiserver-certs\") pod \"calico-apiserver-6ffcc6648d-2jknj\" (UID: \"3475ce39-a584-4708-980c-68f68b25eff1\") " pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" Jan 17 00:18:09.674128 kubelet[2619]: I0117 00:18:09.651819 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fe061b2a-805b-43bd-8451-203c834c880a-goldmane-key-pair\") pod \"goldmane-666569f655-sdfcr\" (UID: \"fe061b2a-805b-43bd-8451-203c834c880a\") " pod="calico-system/goldmane-666569f655-sdfcr" Jan 17 00:18:09.674128 kubelet[2619]: I0117 00:18:09.651873 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gshjg\" (UniqueName: \"kubernetes.io/projected/0a073190-14cb-45b8-a9bf-4fd4665cfd04-kube-api-access-gshjg\") pod \"calico-apiserver-6ffcc6648d-969cl\" (UID: \"0a073190-14cb-45b8-a9bf-4fd4665cfd04\") " pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" Jan 17 00:18:09.674128 kubelet[2619]: I0117 00:18:09.651908 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e-tigera-ca-bundle\") pod \"calico-kube-controllers-d4d576bf5-8czh9\" (UID: \"c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e\") " pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" Jan 17 00:18:09.674128 kubelet[2619]: I0117 00:18:09.651939 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm59t\" (UniqueName: \"kubernetes.io/projected/fe061b2a-805b-43bd-8451-203c834c880a-kube-api-access-tm59t\") pod \"goldmane-666569f655-sdfcr\" (UID: \"fe061b2a-805b-43bd-8451-203c834c880a\") " pod="calico-system/goldmane-666569f655-sdfcr" Jan 17 00:18:09.645300 systemd[1]: Created slice kubepods-besteffort-pod084190ec_90e5_434f_8cbe_774a3d390671.slice - libcontainer container kubepods-besteffort-pod084190ec_90e5_434f_8cbe_774a3d390671.slice. 
Jan 17 00:18:09.675297 kubelet[2619]: I0117 00:18:09.651977 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe061b2a-805b-43bd-8451-203c834c880a-goldmane-ca-bundle\") pod \"goldmane-666569f655-sdfcr\" (UID: \"fe061b2a-805b-43bd-8451-203c834c880a\") " pod="calico-system/goldmane-666569f655-sdfcr" Jan 17 00:18:09.675297 kubelet[2619]: I0117 00:18:09.652236 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/084190ec-90e5-434f-8cbe-774a3d390671-whisker-backend-key-pair\") pod \"whisker-5dfbff4764-c2gj6\" (UID: \"084190ec-90e5-434f-8cbe-774a3d390671\") " pod="calico-system/whisker-5dfbff4764-c2gj6" Jan 17 00:18:09.675297 kubelet[2619]: I0117 00:18:09.652287 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/084190ec-90e5-434f-8cbe-774a3d390671-whisker-ca-bundle\") pod \"whisker-5dfbff4764-c2gj6\" (UID: \"084190ec-90e5-434f-8cbe-774a3d390671\") " pod="calico-system/whisker-5dfbff4764-c2gj6" Jan 17 00:18:09.675297 kubelet[2619]: I0117 00:18:09.652371 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-828hq\" (UniqueName: \"kubernetes.io/projected/3475ce39-a584-4708-980c-68f68b25eff1-kube-api-access-828hq\") pod \"calico-apiserver-6ffcc6648d-2jknj\" (UID: \"3475ce39-a584-4708-980c-68f68b25eff1\") " pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" Jan 17 00:18:09.675297 kubelet[2619]: I0117 00:18:09.652408 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe061b2a-805b-43bd-8451-203c834c880a-config\") pod \"goldmane-666569f655-sdfcr\" (UID: \"fe061b2a-805b-43bd-8451-203c834c880a\") " pod="calico-system/goldmane-666569f655-sdfcr" Jan 17 00:18:09.675762 kubelet[2619]: I0117 00:18:09.652443 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0a073190-14cb-45b8-a9bf-4fd4665cfd04-calico-apiserver-certs\") pod \"calico-apiserver-6ffcc6648d-969cl\" (UID: \"0a073190-14cb-45b8-a9bf-4fd4665cfd04\") " pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" Jan 17 00:18:09.675762 kubelet[2619]: I0117 00:18:09.652504 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjqph\" (UniqueName: \"kubernetes.io/projected/084190ec-90e5-434f-8cbe-774a3d390671-kube-api-access-zjqph\") pod \"whisker-5dfbff4764-c2gj6\" (UID: \"084190ec-90e5-434f-8cbe-774a3d390671\") " pod="calico-system/whisker-5dfbff4764-c2gj6" Jan 17 00:18:09.675762 kubelet[2619]: I0117 00:18:09.652578 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flqwg\" (UniqueName: \"kubernetes.io/projected/c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e-kube-api-access-flqwg\") pod \"calico-kube-controllers-d4d576bf5-8czh9\" (UID: \"c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e\") " pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" Jan 17 00:18:09.822535 containerd[1466]: time="2026-01-17T00:18:09.821930152Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-wmqr2,Uid:7a458d0c-d067-4f59-ad18-82fe02f35050,Namespace:kube-system,Attempt:0,}" Jan 17 00:18:09.845040 containerd[1466]: time="2026-01-17T00:18:09.844677090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kt8n7,Uid:0b123c30-c4a1-486c-a4a2-f586dab5927b,Namespace:kube-system,Attempt:0,}" Jan 17 00:18:09.877209 containerd[1466]: time="2026-01-17T00:18:09.876438244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d4d576bf5-8czh9,Uid:c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e,Namespace:calico-system,Attempt:0,}" Jan 17 00:18:09.893084 containerd[1466]: time="2026-01-17T00:18:09.893012675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-sdfcr,Uid:fe061b2a-805b-43bd-8451-203c834c880a,Namespace:calico-system,Attempt:0,}" Jan 17 00:18:09.925479 systemd[1]: Created slice kubepods-besteffort-pod05d5b250_556f_4421_995f_92aeade92625.slice - libcontainer container kubepods-besteffort-pod05d5b250_556f_4421_995f_92aeade92625.slice. Jan 17 00:18:09.930671 containerd[1466]: time="2026-01-17T00:18:09.930583264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hxqn4,Uid:05d5b250-556f-4421-995f-92aeade92625,Namespace:calico-system,Attempt:0,}" Jan 17 00:18:09.976651 containerd[1466]: time="2026-01-17T00:18:09.976582810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffcc6648d-969cl,Uid:0a073190-14cb-45b8-a9bf-4fd4665cfd04,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:18:09.987682 containerd[1466]: time="2026-01-17T00:18:09.987617056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffcc6648d-2jknj,Uid:3475ce39-a584-4708-980c-68f68b25eff1,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:18:09.995735 containerd[1466]: time="2026-01-17T00:18:09.995559117Z" level=info msg="shim disconnected" id=f22683a21ef57f3c7ac8b151b2ca22197dedab111e93714e20bcfeed06aa1893 namespace=k8s.io Jan 17 00:18:09.995735 containerd[1466]: time="2026-01-17T00:18:09.995637949Z" level=warning msg="cleaning up after shim disconnected" id=f22683a21ef57f3c7ac8b151b2ca22197dedab111e93714e20bcfeed06aa1893 namespace=k8s.io Jan 17 00:18:09.996171 containerd[1466]: time="2026-01-17T00:18:09.995803861Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:18:10.002691 containerd[1466]: time="2026-01-17T00:18:10.002002583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dfbff4764-c2gj6,Uid:084190ec-90e5-434f-8cbe-774a3d390671,Namespace:calico-system,Attempt:0,}" Jan 17 00:18:10.203443 containerd[1466]: time="2026-01-17T00:18:10.203236955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:18:10.661575 containerd[1466]: time="2026-01-17T00:18:10.658394240Z" level=error msg="Failed to destroy network for sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.669932 containerd[1466]: time="2026-01-17T00:18:10.665018381Z" level=error msg="encountered an error cleaning up failed sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 17 00:18:10.669932 containerd[1466]: time="2026-01-17T00:18:10.665127427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d4d576bf5-8czh9,Uid:c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.670165 kubelet[2619]: E0117 00:18:10.665652 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.670165 kubelet[2619]: E0117 00:18:10.665798 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" Jan 17 00:18:10.670165 kubelet[2619]: E0117 00:18:10.665863 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" Jan 17 00:18:10.673931 kubelet[2619]: E0117 00:18:10.666419 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d4d576bf5-8czh9_calico-system(c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d4d576bf5-8czh9_calico-system(c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" podUID="c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e" Jan 17 00:18:10.676408 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8-shm.mount: Deactivated successfully. 
Jan 17 00:18:10.713295 containerd[1466]: time="2026-01-17T00:18:10.712937782Z" level=error msg="Failed to destroy network for sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.713295 containerd[1466]: time="2026-01-17T00:18:10.712937792Z" level=error msg="Failed to destroy network for sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.722518 containerd[1466]: time="2026-01-17T00:18:10.716679095Z" level=error msg="encountered an error cleaning up failed sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.722518 containerd[1466]: time="2026-01-17T00:18:10.716811635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hxqn4,Uid:05d5b250-556f-4421-995f-92aeade92625,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.727430 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b-shm.mount: Deactivated successfully. Jan 17 00:18:10.728012 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400-shm.mount: Deactivated successfully. 
Jan 17 00:18:10.728818 kubelet[2619]: E0117 00:18:10.728729 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.729008 kubelet[2619]: E0117 00:18:10.728918 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hxqn4" Jan 17 00:18:10.729008 kubelet[2619]: E0117 00:18:10.728980 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hxqn4" Jan 17 00:18:10.729139 kubelet[2619]: E0117 00:18:10.729083 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hxqn4_calico-system(05d5b250-556f-4421-995f-92aeade92625)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hxqn4_calico-system(05d5b250-556f-4421-995f-92aeade92625)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625" Jan 17 00:18:10.734873 containerd[1466]: time="2026-01-17T00:18:10.734760221Z" level=error msg="encountered an error cleaning up failed sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.736672 containerd[1466]: time="2026-01-17T00:18:10.736405611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffcc6648d-969cl,Uid:0a073190-14cb-45b8-a9bf-4fd4665cfd04,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.739581 kubelet[2619]: E0117 00:18:10.738002 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.739808 kubelet[2619]: E0117 00:18:10.739631 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" Jan 17 00:18:10.739808 kubelet[2619]: E0117 00:18:10.739672 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" Jan 17 00:18:10.739808 kubelet[2619]: E0117 00:18:10.739748 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ffcc6648d-969cl_calico-apiserver(0a073190-14cb-45b8-a9bf-4fd4665cfd04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ffcc6648d-969cl_calico-apiserver(0a073190-14cb-45b8-a9bf-4fd4665cfd04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" podUID="0a073190-14cb-45b8-a9bf-4fd4665cfd04" Jan 17 00:18:10.776499 containerd[1466]: time="2026-01-17T00:18:10.774335906Z" level=error msg="Failed to destroy network for sandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.776499 containerd[1466]: time="2026-01-17T00:18:10.775920743Z" level=error msg="encountered an error cleaning up failed sandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.776499 containerd[1466]: time="2026-01-17T00:18:10.776024485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wmqr2,Uid:7a458d0c-d067-4f59-ad18-82fe02f35050,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.776800 kubelet[2619]: E0117 00:18:10.776416 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.776800 kubelet[2619]: E0117 00:18:10.776529 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wmqr2" Jan 17 00:18:10.776800 kubelet[2619]: E0117 00:18:10.776558 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wmqr2" Jan 17 00:18:10.777008 kubelet[2619]: E0117 00:18:10.776622 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wmqr2_kube-system(7a458d0c-d067-4f59-ad18-82fe02f35050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wmqr2_kube-system(7a458d0c-d067-4f59-ad18-82fe02f35050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wmqr2" podUID="7a458d0c-d067-4f59-ad18-82fe02f35050" Jan 17 00:18:10.787195 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3-shm.mount: Deactivated successfully. 
Jan 17 00:18:10.793105 containerd[1466]: time="2026-01-17T00:18:10.793028303Z" level=error msg="Failed to destroy network for sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.795239 containerd[1466]: time="2026-01-17T00:18:10.794290466Z" level=error msg="encountered an error cleaning up failed sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.795239 containerd[1466]: time="2026-01-17T00:18:10.794530718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kt8n7,Uid:0b123c30-c4a1-486c-a4a2-f586dab5927b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.795414 kubelet[2619]: E0117 00:18:10.795129 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.795414 kubelet[2619]: E0117 00:18:10.795223 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kt8n7" Jan 17 00:18:10.795414 kubelet[2619]: E0117 00:18:10.795256 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kt8n7" Jan 17 00:18:10.796834 kubelet[2619]: E0117 00:18:10.795319 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kt8n7_kube-system(0b123c30-c4a1-486c-a4a2-f586dab5927b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kt8n7_kube-system(0b123c30-c4a1-486c-a4a2-f586dab5927b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kt8n7" 
podUID="0b123c30-c4a1-486c-a4a2-f586dab5927b" Jan 17 00:18:10.804760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5-shm.mount: Deactivated successfully. Jan 17 00:18:10.824012 containerd[1466]: time="2026-01-17T00:18:10.823754784Z" level=error msg="Failed to destroy network for sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.825233 containerd[1466]: time="2026-01-17T00:18:10.825036126Z" level=error msg="encountered an error cleaning up failed sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.825614 containerd[1466]: time="2026-01-17T00:18:10.825466769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffcc6648d-2jknj,Uid:3475ce39-a584-4708-980c-68f68b25eff1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.826104 kubelet[2619]: E0117 00:18:10.826043 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.826330 kubelet[2619]: E0117 00:18:10.826144 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" Jan 17 00:18:10.826330 kubelet[2619]: E0117 00:18:10.826189 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" Jan 17 00:18:10.826330 kubelet[2619]: E0117 00:18:10.826263 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ffcc6648d-2jknj_calico-apiserver(3475ce39-a584-4708-980c-68f68b25eff1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ffcc6648d-2jknj_calico-apiserver(3475ce39-a584-4708-980c-68f68b25eff1)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" podUID="3475ce39-a584-4708-980c-68f68b25eff1" Jan 17 00:18:10.849965 containerd[1466]: time="2026-01-17T00:18:10.849879926Z" level=error msg="Failed to destroy network for sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.850408 containerd[1466]: time="2026-01-17T00:18:10.850353402Z" level=error msg="encountered an error cleaning up failed sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.850588 containerd[1466]: time="2026-01-17T00:18:10.850506537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-sdfcr,Uid:fe061b2a-805b-43bd-8451-203c834c880a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.850951 kubelet[2619]: E0117 00:18:10.850820 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.850951 kubelet[2619]: E0117 00:18:10.850907 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-sdfcr" Jan 17 00:18:10.851246 kubelet[2619]: E0117 00:18:10.850946 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-sdfcr" Jan 17 00:18:10.851246 kubelet[2619]: E0117 00:18:10.851035 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-sdfcr_calico-system(fe061b2a-805b-43bd-8451-203c834c880a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-666569f655-sdfcr_calico-system(fe061b2a-805b-43bd-8451-203c834c880a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-sdfcr" podUID="fe061b2a-805b-43bd-8451-203c834c880a" Jan 17 00:18:10.857077 containerd[1466]: time="2026-01-17T00:18:10.856898127Z" level=error msg="Failed to destroy network for sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.858431 containerd[1466]: time="2026-01-17T00:18:10.858341788Z" level=error msg="encountered an error cleaning up failed sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.859529 containerd[1466]: time="2026-01-17T00:18:10.858523305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dfbff4764-c2gj6,Uid:084190ec-90e5-434f-8cbe-774a3d390671,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.859761 kubelet[2619]: E0117 00:18:10.858984 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:10.859761 kubelet[2619]: E0117 00:18:10.859075 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dfbff4764-c2gj6" Jan 17 00:18:10.859761 kubelet[2619]: E0117 00:18:10.859110 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dfbff4764-c2gj6" Jan 17 00:18:10.859970 kubelet[2619]: E0117 00:18:10.859230 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dfbff4764-c2gj6_calico-system(084190ec-90e5-434f-8cbe-774a3d390671)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dfbff4764-c2gj6_calico-system(084190ec-90e5-434f-8cbe-774a3d390671)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dfbff4764-c2gj6" podUID="084190ec-90e5-434f-8cbe-774a3d390671" Jan 17 00:18:11.177475 kubelet[2619]: I0117 00:18:11.177386 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:11.185028 containerd[1466]: time="2026-01-17T00:18:11.184942951Z" level=info msg="StopPodSandbox for \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\"" Jan 17 00:18:11.186142 containerd[1466]: time="2026-01-17T00:18:11.185642115Z" level=info msg="Ensure that sandbox 8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b in task-service has been cleanup successfully" Jan 17 00:18:11.196589 kubelet[2619]: I0117 00:18:11.195686 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:11.203689 containerd[1466]: time="2026-01-17T00:18:11.202849076Z" level=info msg="StopPodSandbox for \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\"" Jan 17 00:18:11.207190 containerd[1466]: time="2026-01-17T00:18:11.206760704Z" level=info msg="Ensure that sandbox 0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3 in task-service has been cleanup successfully" Jan 17 00:18:11.215605 kubelet[2619]: I0117 00:18:11.215396 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:11.225555 containerd[1466]: time="2026-01-17T00:18:11.225290182Z" level=info msg="StopPodSandbox for \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\"" Jan 17 00:18:11.226196 containerd[1466]: time="2026-01-17T00:18:11.225992233Z" level=info msg="Ensure that sandbox 87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400 in task-service has been cleanup successfully" Jan 17 00:18:11.238155 kubelet[2619]: I0117 00:18:11.237936 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:11.246791 containerd[1466]: time="2026-01-17T00:18:11.246262445Z" level=info msg="StopPodSandbox for \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\"" Jan 17 00:18:11.248510 containerd[1466]: time="2026-01-17T00:18:11.248053159Z" level=info msg="Ensure that sandbox d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189 in task-service has been cleanup successfully" Jan 17 00:18:11.248724 kubelet[2619]: I0117 00:18:11.248596 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:11.253922 containerd[1466]: time="2026-01-17T00:18:11.253619106Z" level=info msg="StopPodSandbox for \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\"" Jan 17 00:18:11.254949 containerd[1466]: time="2026-01-17T00:18:11.254820366Z" level=info 
msg="Ensure that sandbox 0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a in task-service has been cleanup successfully" Jan 17 00:18:11.275898 kubelet[2619]: I0117 00:18:11.275836 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:11.280255 containerd[1466]: time="2026-01-17T00:18:11.279431577Z" level=info msg="StopPodSandbox for \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\"" Jan 17 00:18:11.280255 containerd[1466]: time="2026-01-17T00:18:11.279765041Z" level=info msg="Ensure that sandbox ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b in task-service has been cleanup successfully" Jan 17 00:18:11.329608 kubelet[2619]: I0117 00:18:11.329554 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:11.335263 containerd[1466]: time="2026-01-17T00:18:11.335180654Z" level=info msg="StopPodSandbox for \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\"" Jan 17 00:18:11.336279 containerd[1466]: time="2026-01-17T00:18:11.335748620Z" level=info msg="Ensure that sandbox cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5 in task-service has been cleanup successfully" Jan 17 00:18:11.360769 kubelet[2619]: I0117 00:18:11.360714 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:11.367226 containerd[1466]: time="2026-01-17T00:18:11.367158714Z" level=info msg="StopPodSandbox for \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\"" Jan 17 00:18:11.373954 containerd[1466]: time="2026-01-17T00:18:11.373850692Z" level=info msg="Ensure that sandbox e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8 in task-service has been cleanup successfully" Jan 17 00:18:11.441779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b-shm.mount: Deactivated successfully. Jan 17 00:18:11.442023 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a-shm.mount: Deactivated successfully. Jan 17 00:18:11.444944 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189-shm.mount: Deactivated successfully. 
Jan 17 00:18:11.500632 containerd[1466]: time="2026-01-17T00:18:11.500391703Z" level=error msg="StopPodSandbox for \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\" failed" error="failed to destroy network for sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:11.500897 kubelet[2619]: E0117 00:18:11.500804 2619 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:11.501003 kubelet[2619]: E0117 00:18:11.500904 2619 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b"} Jan 17 00:18:11.501063 kubelet[2619]: E0117 00:18:11.501009 2619 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"084190ec-90e5-434f-8cbe-774a3d390671\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:18:11.501184 kubelet[2619]: E0117 00:18:11.501050 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"084190ec-90e5-434f-8cbe-774a3d390671\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dfbff4764-c2gj6" podUID="084190ec-90e5-434f-8cbe-774a3d390671" Jan 17 00:18:11.505361 containerd[1466]: time="2026-01-17T00:18:11.505178701Z" level=error msg="StopPodSandbox for \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\" failed" error="failed to destroy network for sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:11.506420 kubelet[2619]: E0117 00:18:11.505657 2619 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:11.506420 kubelet[2619]: E0117 00:18:11.505818 2619 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400"} Jan 17 00:18:11.506420 kubelet[2619]: E0117 00:18:11.505907 2619 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"05d5b250-556f-4421-995f-92aeade92625\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:18:11.506420 kubelet[2619]: E0117 00:18:11.505950 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"05d5b250-556f-4421-995f-92aeade92625\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625" Jan 17 00:18:11.509091 containerd[1466]: time="2026-01-17T00:18:11.508999598Z" level=error msg="StopPodSandbox for \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\" failed" error="failed to destroy network for sandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:11.509649 kubelet[2619]: E0117 00:18:11.509423 2619 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:11.509813 kubelet[2619]: E0117 00:18:11.509717 2619 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3"} Jan 17 00:18:11.510007 kubelet[2619]: E0117 00:18:11.509816 2619 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a458d0c-d067-4f59-ad18-82fe02f35050\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:18:11.510209 kubelet[2619]: E0117 00:18:11.510086 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a458d0c-d067-4f59-ad18-82fe02f35050\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wmqr2" podUID="7a458d0c-d067-4f59-ad18-82fe02f35050" Jan 17 00:18:11.569098 containerd[1466]: time="2026-01-17T00:18:11.568907129Z" level=error msg="StopPodSandbox for \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\" failed" error="failed to destroy network for sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:11.569352 kubelet[2619]: E0117 00:18:11.569266 2619 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:11.569352 kubelet[2619]: E0117 00:18:11.569335 2619 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b"} Jan 17 00:18:11.569531 kubelet[2619]: E0117 00:18:11.569400 2619 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a073190-14cb-45b8-a9bf-4fd4665cfd04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:18:11.569531 kubelet[2619]: E0117 00:18:11.569439 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a073190-14cb-45b8-a9bf-4fd4665cfd04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" podUID="0a073190-14cb-45b8-a9bf-4fd4665cfd04" Jan 17 00:18:11.600506 containerd[1466]: time="2026-01-17T00:18:11.600010444Z" level=error msg="StopPodSandbox for \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\" failed" error="failed to destroy network for sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:11.601526 kubelet[2619]: E0117 00:18:11.601091 2619 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:11.601526 kubelet[2619]: E0117 00:18:11.601181 2619 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189"} Jan 17 00:18:11.601526 kubelet[2619]: E0117 00:18:11.601242 2619 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe061b2a-805b-43bd-8451-203c834c880a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:18:11.601526 kubelet[2619]: E0117 00:18:11.601282 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe061b2a-805b-43bd-8451-203c834c880a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-sdfcr" podUID="fe061b2a-805b-43bd-8451-203c834c880a" Jan 17 00:18:11.605407 containerd[1466]: time="2026-01-17T00:18:11.605329829Z" level=error msg="StopPodSandbox for \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\" failed" error="failed to destroy network for sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:11.606098 kubelet[2619]: E0117 00:18:11.606028 2619 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:11.606247 kubelet[2619]: E0117 00:18:11.606102 2619 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8"} Jan 17 00:18:11.606247 kubelet[2619]: E0117 00:18:11.606165 2619 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:18:11.606247 kubelet[2619]: E0117 00:18:11.606201 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"KillPodSandbox\" for \"c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" podUID="c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e" Jan 17 00:18:11.623622 containerd[1466]: time="2026-01-17T00:18:11.623514276Z" level=error msg="StopPodSandbox for \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\" failed" error="failed to destroy network for sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:11.624820 kubelet[2619]: E0117 00:18:11.624714 2619 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:11.625039 kubelet[2619]: E0117 00:18:11.624844 2619 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a"} Jan 17 00:18:11.625039 kubelet[2619]: E0117 00:18:11.624897 2619 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3475ce39-a584-4708-980c-68f68b25eff1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:18:11.625039 kubelet[2619]: E0117 00:18:11.624934 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3475ce39-a584-4708-980c-68f68b25eff1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" podUID="3475ce39-a584-4708-980c-68f68b25eff1" Jan 17 00:18:11.633209 containerd[1466]: time="2026-01-17T00:18:11.633078799Z" level=error msg="StopPodSandbox for \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\" failed" error="failed to destroy network for sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:18:11.634599 kubelet[2619]: E0117 00:18:11.633758 2619 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:11.634599 kubelet[2619]: E0117 00:18:11.633863 2619 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5"} Jan 17 00:18:11.634599 kubelet[2619]: E0117 00:18:11.633928 2619 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b123c30-c4a1-486c-a4a2-f586dab5927b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:18:11.634599 kubelet[2619]: E0117 00:18:11.633971 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b123c30-c4a1-486c-a4a2-f586dab5927b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kt8n7" podUID="0b123c30-c4a1-486c-a4a2-f586dab5927b" Jan 17 00:18:18.632742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922186423.mount: Deactivated successfully. 
Jan 17 00:18:18.678469 containerd[1466]: time="2026-01-17T00:18:18.678366254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:18.680973 containerd[1466]: time="2026-01-17T00:18:18.680497681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:18:18.684497 containerd[1466]: time="2026-01-17T00:18:18.683269495Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:18.687172 containerd[1466]: time="2026-01-17T00:18:18.687099623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:18.688154 containerd[1466]: time="2026-01-17T00:18:18.688081158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.479923066s" Jan 17 00:18:18.688318 containerd[1466]: time="2026-01-17T00:18:18.688161539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:18:18.723006 containerd[1466]: time="2026-01-17T00:18:18.722909935Z" level=info msg="CreateContainer within sandbox \"1c8fe432d344c877c23e8fcad53815f23800af96a93b0f39dd2bffe39ecb43c0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:18:18.758864 containerd[1466]: time="2026-01-17T00:18:18.758728496Z" level=info msg="CreateContainer within sandbox \"1c8fe432d344c877c23e8fcad53815f23800af96a93b0f39dd2bffe39ecb43c0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dba7f3859a25c596918269d4509723d11371e753f35eb66ef9603448e2f97d6f\"" Jan 17 00:18:18.760882 containerd[1466]: time="2026-01-17T00:18:18.760817865Z" level=info msg="StartContainer for \"dba7f3859a25c596918269d4509723d11371e753f35eb66ef9603448e2f97d6f\"" Jan 17 00:18:18.822859 systemd[1]: Started cri-containerd-dba7f3859a25c596918269d4509723d11371e753f35eb66ef9603448e2f97d6f.scope - libcontainer container dba7f3859a25c596918269d4509723d11371e753f35eb66ef9603448e2f97d6f. Jan 17 00:18:18.888645 containerd[1466]: time="2026-01-17T00:18:18.887126761Z" level=info msg="StartContainer for \"dba7f3859a25c596918269d4509723d11371e753f35eb66ef9603448e2f97d6f\" returns successfully" Jan 17 00:18:19.047716 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:18:19.048795 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
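A quick arithmetic check on the pull recorded above: 156,883,537 bytes in 8.479923066s works out to roughly 18.5 MB/s from ghcr.io:

package main

import "fmt"

func main() {
	const imageBytes = 156883537.0 // repo-digest size reported in the Pulled message
	const pullSecs = 8.479923066
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", imageBytes/pullSecs/1e6, imageBytes/pullSecs/(1<<20))
}

The WireGuard module load that follows is presumably triggered by the freshly started calico-node probing its dataplane options.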
Jan 17 00:18:19.272697 containerd[1466]: time="2026-01-17T00:18:19.272181156Z" level=info msg="StopPodSandbox for \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\"" Jan 17 00:18:19.467777 kubelet[2619]: I0117 00:18:19.467664 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jl27b" podStartSLOduration=1.757475011 podStartE2EDuration="21.467626194s" podCreationTimestamp="2026-01-17 00:17:58 +0000 UTC" firstStartedPulling="2026-01-17 00:17:58.979496483 +0000 UTC m=+25.400978056" lastFinishedPulling="2026-01-17 00:18:18.689647675 +0000 UTC m=+45.111129239" observedRunningTime="2026-01-17 00:18:19.463967122 +0000 UTC m=+45.885448691" watchObservedRunningTime="2026-01-17 00:18:19.467626194 +0000 UTC m=+45.889107765" Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.396 [INFO][3828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.397 [INFO][3828] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" iface="eth0" netns="/var/run/netns/cni-d420440e-aceb-dc72-6c82-2e2bf38dd654" Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.400 [INFO][3828] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" iface="eth0" netns="/var/run/netns/cni-d420440e-aceb-dc72-6c82-2e2bf38dd654" Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.401 [INFO][3828] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" iface="eth0" netns="/var/run/netns/cni-d420440e-aceb-dc72-6c82-2e2bf38dd654" Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.401 [INFO][3828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.401 [INFO][3828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.477 [INFO][3836] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" HandleID="k8s-pod-network.8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--5dfbff4764--c2gj6-eth0" Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.477 [INFO][3836] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.477 [INFO][3836] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.493 [WARNING][3836] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" HandleID="k8s-pod-network.8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--5dfbff4764--c2gj6-eth0" Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.493 [INFO][3836] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" HandleID="k8s-pod-network.8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--5dfbff4764--c2gj6-eth0" Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.500 [INFO][3836] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:19.509940 containerd[1466]: 2026-01-17 00:18:19.506 [INFO][3828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:19.513324 containerd[1466]: time="2026-01-17T00:18:19.510326539Z" level=info msg="TearDown network for sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\" successfully" Jan 17 00:18:19.513324 containerd[1466]: time="2026-01-17T00:18:19.510762541Z" level=info msg="StopPodSandbox for \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\" returns successfully" Jan 17 00:18:19.631673 systemd[1]: run-netns-cni\x2dd420440e\x2daceb\x2ddc72\x2d6c82\x2d2e2bf38dd654.mount: Deactivated successfully. Jan 17 00:18:19.646565 kubelet[2619]: I0117 00:18:19.645762 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjqph\" (UniqueName: \"kubernetes.io/projected/084190ec-90e5-434f-8cbe-774a3d390671-kube-api-access-zjqph\") pod \"084190ec-90e5-434f-8cbe-774a3d390671\" (UID: \"084190ec-90e5-434f-8cbe-774a3d390671\") " Jan 17 00:18:19.646565 kubelet[2619]: I0117 00:18:19.645876 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/084190ec-90e5-434f-8cbe-774a3d390671-whisker-ca-bundle\") pod \"084190ec-90e5-434f-8cbe-774a3d390671\" (UID: \"084190ec-90e5-434f-8cbe-774a3d390671\") " Jan 17 00:18:19.646565 kubelet[2619]: I0117 00:18:19.645917 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/084190ec-90e5-434f-8cbe-774a3d390671-whisker-backend-key-pair\") pod \"084190ec-90e5-434f-8cbe-774a3d390671\" (UID: \"084190ec-90e5-434f-8cbe-774a3d390671\") " Jan 17 00:18:19.651176 kubelet[2619]: I0117 00:18:19.650131 2619 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084190ec-90e5-434f-8cbe-774a3d390671-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "084190ec-90e5-434f-8cbe-774a3d390671" (UID: "084190ec-90e5-434f-8cbe-774a3d390671"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:18:19.661843 kubelet[2619]: I0117 00:18:19.661746 2619 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/084190ec-90e5-434f-8cbe-774a3d390671-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "084190ec-90e5-434f-8cbe-774a3d390671" (UID: "084190ec-90e5-434f-8cbe-774a3d390671"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:18:19.662160 kubelet[2619]: I0117 00:18:19.661986 2619 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/084190ec-90e5-434f-8cbe-774a3d390671-kube-api-access-zjqph" (OuterVolumeSpecName: "kube-api-access-zjqph") pod "084190ec-90e5-434f-8cbe-774a3d390671" (UID: "084190ec-90e5-434f-8cbe-774a3d390671"). InnerVolumeSpecName "kube-api-access-zjqph". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:18:19.662427 systemd[1]: var-lib-kubelet-pods-084190ec\x2d90e5\x2d434f\x2d8cbe\x2d774a3d390671-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzjqph.mount: Deactivated successfully. Jan 17 00:18:19.663339 systemd[1]: var-lib-kubelet-pods-084190ec\x2d90e5\x2d434f\x2d8cbe\x2d774a3d390671-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:18:19.747221 kubelet[2619]: I0117 00:18:19.747055 2619 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/084190ec-90e5-434f-8cbe-774a3d390671-whisker-ca-bundle\") on node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" DevicePath \"\"" Jan 17 00:18:19.747221 kubelet[2619]: I0117 00:18:19.747142 2619 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zjqph\" (UniqueName: \"kubernetes.io/projected/084190ec-90e5-434f-8cbe-774a3d390671-kube-api-access-zjqph\") on node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" DevicePath \"\"" Jan 17 00:18:19.747221 kubelet[2619]: I0117 00:18:19.747165 2619 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/084190ec-90e5-434f-8cbe-774a3d390671-whisker-backend-key-pair\") on node \"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694\" DevicePath \"\"" Jan 17 00:18:19.921140 systemd[1]: Removed slice kubepods-besteffort-pod084190ec_90e5_434f_8cbe_774a3d390671.slice - libcontainer container kubepods-besteffort-pod084190ec_90e5_434f_8cbe_774a3d390671.slice. Jan 17 00:18:20.520684 systemd[1]: Created slice kubepods-besteffort-pod6f99b557_3ce0_4f99_bb06_f6d4f3390790.slice - libcontainer container kubepods-besteffort-pod6f99b557_3ce0_4f99_bb06_f6d4f3390790.slice. 
Jan 17 00:18:20.654904 kubelet[2619]: I0117 00:18:20.654739 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f99b557-3ce0-4f99-bb06-f6d4f3390790-whisker-ca-bundle\") pod \"whisker-787bd9bbf8-qw494\" (UID: \"6f99b557-3ce0-4f99-bb06-f6d4f3390790\") " pod="calico-system/whisker-787bd9bbf8-qw494" Jan 17 00:18:20.654904 kubelet[2619]: I0117 00:18:20.654823 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbvc\" (UniqueName: \"kubernetes.io/projected/6f99b557-3ce0-4f99-bb06-f6d4f3390790-kube-api-access-ztbvc\") pod \"whisker-787bd9bbf8-qw494\" (UID: \"6f99b557-3ce0-4f99-bb06-f6d4f3390790\") " pod="calico-system/whisker-787bd9bbf8-qw494" Jan 17 00:18:20.655744 kubelet[2619]: I0117 00:18:20.654937 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f99b557-3ce0-4f99-bb06-f6d4f3390790-whisker-backend-key-pair\") pod \"whisker-787bd9bbf8-qw494\" (UID: \"6f99b557-3ce0-4f99-bb06-f6d4f3390790\") " pod="calico-system/whisker-787bd9bbf8-qw494" Jan 17 00:18:20.828619 containerd[1466]: time="2026-01-17T00:18:20.828383067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787bd9bbf8-qw494,Uid:6f99b557-3ce0-4f99-bb06-f6d4f3390790,Namespace:calico-system,Attempt:0,}" Jan 17 00:18:21.194057 systemd-networkd[1370]: cali708e85a058c: Link UP Jan 17 00:18:21.198419 systemd-networkd[1370]: cali708e85a058c: Gained carrier Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:20.956 [INFO][3889] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:20.983 [INFO][3889] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0 whisker-787bd9bbf8- calico-system 6f99b557-3ce0-4f99-bb06-f6d4f3390790 924 0 2026-01-17 00:18:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:787bd9bbf8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694 whisker-787bd9bbf8-qw494 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali708e85a058c [] [] }} ContainerID="5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" Namespace="calico-system" Pod="whisker-787bd9bbf8-qw494" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:20.984 [INFO][3889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" Namespace="calico-system" Pod="whisker-787bd9bbf8-qw494" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.064 [INFO][3945] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" HandleID="k8s-pod-network.5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" 
Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.064 [INFO][3945] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" HandleID="k8s-pod-network.5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000eaaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", "pod":"whisker-787bd9bbf8-qw494", "timestamp":"2026-01-17 00:18:21.063997879 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.064 [INFO][3945] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.064 [INFO][3945] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.064 [INFO][3945] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694' Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.085 [INFO][3945] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.097 [INFO][3945] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.109 [INFO][3945] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.114 [INFO][3945] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.119 [INFO][3945] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.120 [INFO][3945] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.123 [INFO][3945] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881 Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.135 [INFO][3945] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.153 [INFO][3945] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.65/26] block=192.168.9.64/26 handle="k8s-pod-network.5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.153 [INFO][3945] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.65/26] handle="k8s-pod-network.5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.154 [INFO][3945] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:21.235378 containerd[1466]: 2026-01-17 00:18:21.154 [INFO][3945] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.65/26] IPv6=[] ContainerID="5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" HandleID="k8s-pod-network.5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0" Jan 17 00:18:21.239083 containerd[1466]: 2026-01-17 00:18:21.157 [INFO][3889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" Namespace="calico-system" Pod="whisker-787bd9bbf8-qw494" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0", GenerateName:"whisker-787bd9bbf8-", Namespace:"calico-system", SelfLink:"", UID:"6f99b557-3ce0-4f99-bb06-f6d4f3390790", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 18, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"787bd9bbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"", Pod:"whisker-787bd9bbf8-qw494", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali708e85a058c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:21.239083 containerd[1466]: 2026-01-17 00:18:21.158 [INFO][3889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.65/32] ContainerID="5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" Namespace="calico-system" Pod="whisker-787bd9bbf8-qw494" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0" Jan 17 00:18:21.239083 containerd[1466]: 2026-01-17 00:18:21.158 [INFO][3889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali708e85a058c 
ContainerID="5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" Namespace="calico-system" Pod="whisker-787bd9bbf8-qw494" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0" Jan 17 00:18:21.239083 containerd[1466]: 2026-01-17 00:18:21.201 [INFO][3889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" Namespace="calico-system" Pod="whisker-787bd9bbf8-qw494" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0" Jan 17 00:18:21.239083 containerd[1466]: 2026-01-17 00:18:21.204 [INFO][3889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" Namespace="calico-system" Pod="whisker-787bd9bbf8-qw494" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0", GenerateName:"whisker-787bd9bbf8-", Namespace:"calico-system", SelfLink:"", UID:"6f99b557-3ce0-4f99-bb06-f6d4f3390790", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 18, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"787bd9bbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881", Pod:"whisker-787bd9bbf8-qw494", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali708e85a058c", MAC:"1a:3b:99:24:ef:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:21.239083 containerd[1466]: 2026-01-17 00:18:21.231 [INFO][3889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881" Namespace="calico-system" Pod="whisker-787bd9bbf8-qw494" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--787bd9bbf8--qw494-eth0" Jan 17 00:18:21.286618 containerd[1466]: time="2026-01-17T00:18:21.285174721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:21.286618 containerd[1466]: time="2026-01-17T00:18:21.285321659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:21.286618 containerd[1466]: time="2026-01-17T00:18:21.285355525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:21.286618 containerd[1466]: time="2026-01-17T00:18:21.285605033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:21.377838 systemd[1]: Started cri-containerd-5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881.scope - libcontainer container 5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881. Jan 17 00:18:21.563247 containerd[1466]: time="2026-01-17T00:18:21.563127987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787bd9bbf8-qw494,Uid:6f99b557-3ce0-4f99-bb06-f6d4f3390790,Namespace:calico-system,Attempt:0,} returns sandbox id \"5db3dee6e6ece8c343b1a409f2969b1aa15caa92570b0c80d2e2024cf415c881\"" Jan 17 00:18:21.570727 containerd[1466]: time="2026-01-17T00:18:21.570217618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:18:21.744505 containerd[1466]: time="2026-01-17T00:18:21.744199817Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:21.747493 containerd[1466]: time="2026-01-17T00:18:21.746309998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:18:21.747493 containerd[1466]: time="2026-01-17T00:18:21.746441524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:18:21.747812 kubelet[2619]: E0117 00:18:21.746725 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:18:21.747812 kubelet[2619]: E0117 00:18:21.746823 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:18:21.749576 kubelet[2619]: E0117 00:18:21.747074 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a4e7d527b0d249488a1c8abb4df7b11b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztbvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bd9bbf8-qw494_calico-system(6f99b557-3ce0-4f99-bb06-f6d4f3390790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:21.752158 containerd[1466]: time="2026-01-17T00:18:21.751426722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:18:21.918026 containerd[1466]: time="2026-01-17T00:18:21.917727412Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:21.923967 kubelet[2619]: I0117 00:18:21.923893 2619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="084190ec-90e5-434f-8cbe-774a3d390671" path="/var/lib/kubelet/pods/084190ec-90e5-434f-8cbe-774a3d390671/volumes" Jan 17 00:18:21.929403 containerd[1466]: time="2026-01-17T00:18:21.929301826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:18:21.929403 containerd[1466]: time="2026-01-17T00:18:21.929398331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:18:21.930322 kubelet[2619]: E0117 00:18:21.929833 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:18:21.930322 kubelet[2619]: E0117 00:18:21.929940 2619 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:18:21.930768 kubelet[2619]: E0117 00:18:21.930147 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztbvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bd9bbf8-qw494_calico-system(6f99b557-3ce0-4f99-bb06-f6d4f3390790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:21.932509 kubelet[2619]: E0117 00:18:21.931918 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787bd9bbf8-qw494" podUID="6f99b557-3ce0-4f99-bb06-f6d4f3390790" Jan 17 00:18:22.190563 kernel: 
bpftool[4048]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:18:22.432514 kubelet[2619]: E0117 00:18:22.431738 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787bd9bbf8-qw494" podUID="6f99b557-3ce0-4f99-bb06-f6d4f3390790" Jan 17 00:18:22.670682 systemd-networkd[1370]: cali708e85a058c: Gained IPv6LL Jan 17 00:18:22.714670 systemd-networkd[1370]: vxlan.calico: Link UP Jan 17 00:18:22.714686 systemd-networkd[1370]: vxlan.calico: Gained carrier Jan 17 00:18:22.911648 containerd[1466]: time="2026-01-17T00:18:22.911398179Z" level=info msg="StopPodSandbox for \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\"" Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.084 [INFO][4095] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.085 [INFO][4095] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" iface="eth0" netns="/var/run/netns/cni-925602f1-e1ad-36a2-f47e-e05b8f247631" Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.086 [INFO][4095] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" iface="eth0" netns="/var/run/netns/cni-925602f1-e1ad-36a2-f47e-e05b8f247631" Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.087 [INFO][4095] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" iface="eth0" netns="/var/run/netns/cni-925602f1-e1ad-36a2-f47e-e05b8f247631" Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.087 [INFO][4095] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.087 [INFO][4095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.141 [INFO][4103] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" HandleID="k8s-pod-network.0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.142 [INFO][4103] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.142 [INFO][4103] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.168 [WARNING][4103] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" HandleID="k8s-pod-network.0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.168 [INFO][4103] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" HandleID="k8s-pod-network.0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.177 [INFO][4103] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:23.183060 containerd[1466]: 2026-01-17 00:18:23.180 [INFO][4095] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:23.184381 containerd[1466]: time="2026-01-17T00:18:23.183317908Z" level=info msg="TearDown network for sandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\" successfully" Jan 17 00:18:23.184381 containerd[1466]: time="2026-01-17T00:18:23.183360903Z" level=info msg="StopPodSandbox for \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\" returns successfully" Jan 17 00:18:23.189880 containerd[1466]: time="2026-01-17T00:18:23.184940617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wmqr2,Uid:7a458d0c-d067-4f59-ad18-82fe02f35050,Namespace:kube-system,Attempt:1,}" Jan 17 00:18:23.206161 systemd[1]: run-netns-cni\x2d925602f1\x2de1ad\x2d36a2\x2df47e\x2de05b8f247631.mount: Deactivated successfully. 
Jan 17 00:18:23.441315 kubelet[2619]: E0117 00:18:23.439756 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787bd9bbf8-qw494" podUID="6f99b557-3ce0-4f99-bb06-f6d4f3390790" Jan 17 00:18:23.608575 systemd-networkd[1370]: calia01385edea1: Link UP Jan 17 00:18:23.611792 systemd-networkd[1370]: calia01385edea1: Gained carrier Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.336 [INFO][4109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0 coredns-668d6bf9bc- kube-system 7a458d0c-d067-4f59-ad18-82fe02f35050 949 0 2026-01-17 00:17:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694 coredns-668d6bf9bc-wmqr2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia01385edea1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-wmqr2" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.337 [INFO][4109] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-wmqr2" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.421 [INFO][4123] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" HandleID="k8s-pod-network.af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.421 [INFO][4123] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" HandleID="k8s-pod-network.af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f980), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", "pod":"coredns-668d6bf9bc-wmqr2", "timestamp":"2026-01-17 00:18:23.421553184 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.422 [INFO][4123] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.422 [INFO][4123] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.422 [INFO][4123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694' Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.453 [INFO][4123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.496 [INFO][4123] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.517 [INFO][4123] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.523 [INFO][4123] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.532 [INFO][4123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.533 [INFO][4123] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.538 [INFO][4123] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6 Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.553 [INFO][4123] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.572 [INFO][4123] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.66/26] block=192.168.9.64/26 handle="k8s-pod-network.af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.572 [INFO][4123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.66/26] handle="k8s-pod-network.af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.572 
[INFO][4123] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:23.666889 containerd[1466]: 2026-01-17 00:18:23.572 [INFO][4123] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.66/26] IPv6=[] ContainerID="af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" HandleID="k8s-pod-network.af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:23.671282 containerd[1466]: 2026-01-17 00:18:23.578 [INFO][4109] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-wmqr2" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7a458d0c-d067-4f59-ad18-82fe02f35050", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"", Pod:"coredns-668d6bf9bc-wmqr2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia01385edea1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:23.671282 containerd[1466]: 2026-01-17 00:18:23.580 [INFO][4109] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.66/32] ContainerID="af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-wmqr2" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:23.671282 containerd[1466]: 2026-01-17 00:18:23.580 [INFO][4109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia01385edea1 ContainerID="af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-wmqr2" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" 
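The ipam lines above trace Calico's block-affinity allocation: the node already holds an affinity for the /26 block 192.168.9.64/26, so the plugin loads that block under the host-wide IPAM lock and claims the next free address, 192.168.9.66. The toy model below mirrors just the selection step, assuming 192.168.9.64 and .65 were claimed by earlier workloads (the excerpt begins after those assignments); it is not Calico's actual allocator, which also persists the block and its handles to the datastore.

```python
# Toy block-affinity allocator: pick the next free /32 from the node's
# affine /26 block, mimicking the "Attempting to assign 1 addresses from
# block" / "Successfully claimed IPs" trace above. Not Calico's real code.
import ipaddress

BLOCK = ipaddress.ip_network("192.168.9.64/26")      # block from the log
allocated = {ipaddress.ip_address("192.168.9.64"),   # assumed earlier claims
             ipaddress.ip_address("192.168.9.65")}

def claim_next(block, allocated):
    # Calico treats a block as a flat pool of /32s, so iterate every address.
    for addr in block:
        if addr not in allocated:
            allocated.add(addr)
            return addr
    raise RuntimeError("block exhausted; a real allocator would try "
                       "another block or borrow a non-affine one")

print(claim_next(BLOCK, allocated))   # -> 192.168.9.66, as in the log
print(claim_next(BLOCK, allocated))   # -> 192.168.9.67, claimed further down
```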
Jan 17 00:18:23.671282 containerd[1466]: 2026-01-17 00:18:23.616 [INFO][4109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-wmqr2" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:23.671282 containerd[1466]: 2026-01-17 00:18:23.623 [INFO][4109] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-wmqr2" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7a458d0c-d067-4f59-ad18-82fe02f35050", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6", Pod:"coredns-668d6bf9bc-wmqr2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia01385edea1", MAC:"5a:5f:be:db:60:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:23.671282 containerd[1466]: 2026-01-17 00:18:23.660 [INFO][4109] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-wmqr2" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:23.731165 containerd[1466]: time="2026-01-17T00:18:23.729834071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:23.731165 containerd[1466]: time="2026-01-17T00:18:23.729940460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:23.731165 containerd[1466]: time="2026-01-17T00:18:23.729969247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:23.731165 containerd[1466]: time="2026-01-17T00:18:23.730157034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:23.810131 systemd[1]: run-containerd-runc-k8s.io-af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6-runc.rubh2j.mount: Deactivated successfully. Jan 17 00:18:23.824908 systemd[1]: Started cri-containerd-af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6.scope - libcontainer container af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6. Jan 17 00:18:23.917677 containerd[1466]: time="2026-01-17T00:18:23.917365495Z" level=info msg="StopPodSandbox for \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\"" Jan 17 00:18:23.994730 containerd[1466]: time="2026-01-17T00:18:23.993873105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wmqr2,Uid:7a458d0c-d067-4f59-ad18-82fe02f35050,Namespace:kube-system,Attempt:1,} returns sandbox id \"af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6\"" Jan 17 00:18:24.030101 containerd[1466]: time="2026-01-17T00:18:24.028870526Z" level=info msg="CreateContainer within sandbox \"af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:18:24.100409 containerd[1466]: time="2026-01-17T00:18:24.099182866Z" level=info msg="CreateContainer within sandbox \"af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"662541d7e2e1b56b195979dc388ffb7e9e86628cca2c69f447472023a610bd47\"" Jan 17 00:18:24.103362 containerd[1466]: time="2026-01-17T00:18:24.102482816Z" level=info msg="StartContainer for \"662541d7e2e1b56b195979dc388ffb7e9e86628cca2c69f447472023a610bd47\"" Jan 17 00:18:24.181884 systemd[1]: Started cri-containerd-662541d7e2e1b56b195979dc388ffb7e9e86628cca2c69f447472023a610bd47.scope - libcontainer container 662541d7e2e1b56b195979dc388ffb7e9e86628cca2c69f447472023a610bd47. Jan 17 00:18:24.334952 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.166 [INFO][4207] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.166 [INFO][4207] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" iface="eth0" netns="/var/run/netns/cni-11ac20d5-5d72-579a-65d0-07a440969af0" Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.172 [INFO][4207] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" iface="eth0" netns="/var/run/netns/cni-11ac20d5-5d72-579a-65d0-07a440969af0" Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.173 [INFO][4207] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" iface="eth0" netns="/var/run/netns/cni-11ac20d5-5d72-579a-65d0-07a440969af0" Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.174 [INFO][4207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.174 [INFO][4207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.283 [INFO][4241] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" HandleID="k8s-pod-network.e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.283 [INFO][4241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.283 [INFO][4241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.312 [WARNING][4241] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" HandleID="k8s-pod-network.e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.312 [INFO][4241] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" HandleID="k8s-pod-network.e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.321 [INFO][4241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:24.345521 containerd[1466]: 2026-01-17 00:18:24.326 [INFO][4207] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:24.350975 containerd[1466]: time="2026-01-17T00:18:24.348820411Z" level=info msg="TearDown network for sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\" successfully" Jan 17 00:18:24.350975 containerd[1466]: time="2026-01-17T00:18:24.349568536Z" level=info msg="StopPodSandbox for \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\" returns successfully" Jan 17 00:18:24.359894 containerd[1466]: time="2026-01-17T00:18:24.357497947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d4d576bf5-8czh9,Uid:c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e,Namespace:calico-system,Attempt:1,}" Jan 17 00:18:24.363878 systemd[1]: run-netns-cni\x2d11ac20d5\x2d5d72\x2d579a\x2d65d0\x2d07a440969af0.mount: Deactivated successfully. 
Jan 17 00:18:24.367550 containerd[1466]: time="2026-01-17T00:18:24.366281906Z" level=info msg="StartContainer for \"662541d7e2e1b56b195979dc388ffb7e9e86628cca2c69f447472023a610bd47\" returns successfully" Jan 17 00:18:24.737533 systemd-networkd[1370]: cali2be5bae72c6: Link UP Jan 17 00:18:24.741157 systemd-networkd[1370]: cali2be5bae72c6: Gained carrier Jan 17 00:18:24.773916 kubelet[2619]: I0117 00:18:24.772930 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wmqr2" podStartSLOduration=45.772882662 podStartE2EDuration="45.772882662s" podCreationTimestamp="2026-01-17 00:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:18:24.536008459 +0000 UTC m=+50.957490042" watchObservedRunningTime="2026-01-17 00:18:24.772882662 +0000 UTC m=+51.194364232" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.563 [INFO][4268] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0 calico-kube-controllers-d4d576bf5- calico-system c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e 965 0 2026-01-17 00:17:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d4d576bf5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694 calico-kube-controllers-d4d576bf5-8czh9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2be5bae72c6 [] [] }} ContainerID="c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" Namespace="calico-system" Pod="calico-kube-controllers-d4d576bf5-8czh9" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.563 [INFO][4268] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" Namespace="calico-system" Pod="calico-kube-controllers-d4d576bf5-8czh9" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.648 [INFO][4293] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" HandleID="k8s-pod-network.c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.650 [INFO][4293] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" HandleID="k8s-pod-network.c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8c0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", "pod":"calico-kube-controllers-d4d576bf5-8czh9", "timestamp":"2026-01-17 00:18:24.648239364 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.650 [INFO][4293] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.650 [INFO][4293] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.651 [INFO][4293] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694' Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.665 [INFO][4293] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.678 [INFO][4293] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.688 [INFO][4293] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.693 [INFO][4293] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.698 [INFO][4293] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.698 [INFO][4293] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.702 [INFO][4293] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18 Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.713 [INFO][4293] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.725 [INFO][4293] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.67/26] block=192.168.9.64/26 handle="k8s-pod-network.c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.725 [INFO][4293] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.67/26] handle="k8s-pod-network.c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.725 [INFO][4293] ipam/ipam_plugin.go 398: 
Released host-wide IPAM lock. Jan 17 00:18:24.782931 containerd[1466]: 2026-01-17 00:18:24.726 [INFO][4293] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.67/26] IPv6=[] ContainerID="c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" HandleID="k8s-pod-network.c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:24.789381 containerd[1466]: 2026-01-17 00:18:24.730 [INFO][4268] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" Namespace="calico-system" Pod="calico-kube-controllers-d4d576bf5-8czh9" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0", GenerateName:"calico-kube-controllers-d4d576bf5-", Namespace:"calico-system", SelfLink:"", UID:"c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d4d576bf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"", Pod:"calico-kube-controllers-d4d576bf5-8czh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2be5bae72c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:24.789381 containerd[1466]: 2026-01-17 00:18:24.731 [INFO][4268] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.67/32] ContainerID="c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" Namespace="calico-system" Pod="calico-kube-controllers-d4d576bf5-8czh9" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:24.789381 containerd[1466]: 2026-01-17 00:18:24.731 [INFO][4268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2be5bae72c6 ContainerID="c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" Namespace="calico-system" Pod="calico-kube-controllers-d4d576bf5-8czh9" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:24.789381 containerd[1466]: 2026-01-17 00:18:24.740 [INFO][4268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" Namespace="calico-system" Pod="calico-kube-controllers-d4d576bf5-8czh9" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:24.789381 containerd[1466]: 2026-01-17 00:18:24.741 [INFO][4268] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" Namespace="calico-system" Pod="calico-kube-controllers-d4d576bf5-8czh9" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0", GenerateName:"calico-kube-controllers-d4d576bf5-", Namespace:"calico-system", SelfLink:"", UID:"c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d4d576bf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18", Pod:"calico-kube-controllers-d4d576bf5-8czh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2be5bae72c6", MAC:"9e:33:11:63:50:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:24.789381 containerd[1466]: 2026-01-17 00:18:24.776 [INFO][4268] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18" Namespace="calico-system" Pod="calico-kube-controllers-d4d576bf5-8czh9" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:24.841641 containerd[1466]: time="2026-01-17T00:18:24.840829408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:24.841641 containerd[1466]: time="2026-01-17T00:18:24.840940001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:24.841641 containerd[1466]: time="2026-01-17T00:18:24.840969507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:24.841641 containerd[1466]: time="2026-01-17T00:18:24.841136069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:24.914335 containerd[1466]: time="2026-01-17T00:18:24.913589347Z" level=info msg="StopPodSandbox for \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\"" Jan 17 00:18:24.917059 containerd[1466]: time="2026-01-17T00:18:24.916995958Z" level=info msg="StopPodSandbox for \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\"" Jan 17 00:18:24.935292 systemd[1]: Started cri-containerd-c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18.scope - libcontainer container c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18. Jan 17 00:18:25.166826 systemd-networkd[1370]: calia01385edea1: Gained IPv6LL Jan 17 00:18:25.313431 containerd[1466]: time="2026-01-17T00:18:25.313367818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d4d576bf5-8czh9,Uid:c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e,Namespace:calico-system,Attempt:1,} returns sandbox id \"c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18\"" Jan 17 00:18:25.322608 containerd[1466]: time="2026-01-17T00:18:25.322096907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.152 [INFO][4352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.153 [INFO][4352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" iface="eth0" netns="/var/run/netns/cni-edb80306-69a2-c342-c2ab-6a10a11a0c0d" Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.153 [INFO][4352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" iface="eth0" netns="/var/run/netns/cni-edb80306-69a2-c342-c2ab-6a10a11a0c0d" Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.154 [INFO][4352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" iface="eth0" netns="/var/run/netns/cni-edb80306-69a2-c342-c2ab-6a10a11a0c0d" Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.154 [INFO][4352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.154 [INFO][4352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.278 [INFO][4375] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" HandleID="k8s-pod-network.ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.282 [INFO][4375] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
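The pod_startup_latency_tracker entry further up records podStartSLOduration=45.772882662 for coredns-668d6bf9bc-wmqr2 with both pull timestamps zeroed ("0001-01-01 00:00:00"), i.e. no image pull happened, so the SLO duration and the end-to-end duration coincide: observedRunningTime minus podCreationTimestamp. The arithmetic from the logged timestamps checks out (Python's datetime truncates the nanoseconds the log carries):

```python
# Re-derive the 45.77s startup figure from the tracker entry above:
# observedRunningTime - podCreationTimestamp when nothing was pulled.
from datetime import datetime, timezone

created = datetime(2026, 1, 17, 0, 17, 39, tzinfo=timezone.utc)
running = datetime(2026, 1, 17, 0, 18, 24, 772882, tzinfo=timezone.utc)
print((running - created).total_seconds())   # 45.772882
```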
Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.282 [INFO][4375] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.300 [WARNING][4375] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" HandleID="k8s-pod-network.ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.300 [INFO][4375] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" HandleID="k8s-pod-network.ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.306 [INFO][4375] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:25.322832 containerd[1466]: 2026-01-17 00:18:25.318 [INFO][4352] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:25.328824 containerd[1466]: time="2026-01-17T00:18:25.322855175Z" level=info msg="TearDown network for sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\" successfully" Jan 17 00:18:25.328824 containerd[1466]: time="2026-01-17T00:18:25.322891850Z" level=info msg="StopPodSandbox for \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\" returns successfully" Jan 17 00:18:25.328824 containerd[1466]: time="2026-01-17T00:18:25.325084024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffcc6648d-969cl,Uid:0a073190-14cb-45b8-a9bf-4fd4665cfd04,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:18:25.334569 systemd[1]: run-netns-cni\x2dedb80306\x2d69a2\x2dc342\x2dc2ab\x2d6a10a11a0c0d.mount: Deactivated successfully. Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.157 [INFO][4357] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.159 [INFO][4357] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" iface="eth0" netns="/var/run/netns/cni-42507bb9-c552-ac0e-5bff-18126ef970f1" Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.160 [INFO][4357] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" iface="eth0" netns="/var/run/netns/cni-42507bb9-c552-ac0e-5bff-18126ef970f1" Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.161 [INFO][4357] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" iface="eth0" netns="/var/run/netns/cni-42507bb9-c552-ac0e-5bff-18126ef970f1" Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.161 [INFO][4357] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.162 [INFO][4357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.291 [INFO][4380] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" HandleID="k8s-pod-network.87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.292 [INFO][4380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.308 [INFO][4380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.338 [WARNING][4380] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" HandleID="k8s-pod-network.87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.338 [INFO][4380] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" HandleID="k8s-pod-network.87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.343 [INFO][4380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:25.349330 containerd[1466]: 2026-01-17 00:18:25.346 [INFO][4357] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:25.353357 containerd[1466]: time="2026-01-17T00:18:25.350985724Z" level=info msg="TearDown network for sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\" successfully" Jan 17 00:18:25.353357 containerd[1466]: time="2026-01-17T00:18:25.351040774Z" level=info msg="StopPodSandbox for \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\" returns successfully" Jan 17 00:18:25.355486 containerd[1466]: time="2026-01-17T00:18:25.355403999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hxqn4,Uid:05d5b250-556f-4421-995f-92aeade92625,Namespace:calico-system,Attempt:1,}" Jan 17 00:18:25.358153 systemd[1]: run-netns-cni\x2d42507bb9\x2dc552\x2dac0e\x2d5bff\x2d18126ef970f1.mount: Deactivated successfully. 
Jan 17 00:18:25.583993 containerd[1466]: time="2026-01-17T00:18:25.581524917Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:25.589099 containerd[1466]: time="2026-01-17T00:18:25.587203993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:18:25.589511 containerd[1466]: time="2026-01-17T00:18:25.589377079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:18:25.589859 kubelet[2619]: E0117 00:18:25.589798 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:18:25.589990 kubelet[2619]: E0117 00:18:25.589880 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:18:25.590169 kubelet[2619]: E0117 00:18:25.590093 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flqwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-d4d576bf5-8czh9_calico-system(c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:25.591950 kubelet[2619]: E0117 00:18:25.591782 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" podUID="c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e" Jan 17 00:18:25.708382 systemd-networkd[1370]: cali2870be1e19b: Link UP Jan 17 00:18:25.715367 systemd-networkd[1370]: cali2870be1e19b: Gained carrier Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.514 [INFO][4400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0 csi-node-driver- calico-system 05d5b250-556f-4421-995f-92aeade92625 979 0 2026-01-17 00:17:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694 csi-node-driver-hxqn4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2870be1e19b [] [] }} ContainerID="f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" Namespace="calico-system" Pod="csi-node-driver-hxqn4" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.514 [INFO][4400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" Namespace="calico-system" Pod="csi-node-driver-hxqn4" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.579 [INFO][4417] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" HandleID="k8s-pod-network.f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.581 [INFO][4417] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" HandleID="k8s-pod-network.f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", "pod":"csi-node-driver-hxqn4", "timestamp":"2026-01-17 00:18:25.579376671 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.581 [INFO][4417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.581 [INFO][4417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.581 [INFO][4417] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694' Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.612 [INFO][4417] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.628 [INFO][4417] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.639 [INFO][4417] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.644 [INFO][4417] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.651 [INFO][4417] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.651 [INFO][4417] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.660 [INFO][4417] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91 Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.668 [INFO][4417] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 
handle="k8s-pod-network.f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.682 [INFO][4417] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.68/26] block=192.168.9.64/26 handle="k8s-pod-network.f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.684 [INFO][4417] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.68/26] handle="k8s-pod-network.f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.684 [INFO][4417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:25.756715 containerd[1466]: 2026-01-17 00:18:25.685 [INFO][4417] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.68/26] IPv6=[] ContainerID="f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" HandleID="k8s-pod-network.f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:25.757960 containerd[1466]: 2026-01-17 00:18:25.695 [INFO][4400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" Namespace="calico-system" Pod="csi-node-driver-hxqn4" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"05d5b250-556f-4421-995f-92aeade92625", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"", Pod:"csi-node-driver-hxqn4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2870be1e19b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:25.757960 containerd[1466]: 2026-01-17 00:18:25.696 [INFO][4400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.68/32] ContainerID="f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" Namespace="calico-system" 
Pod="csi-node-driver-hxqn4" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:25.757960 containerd[1466]: 2026-01-17 00:18:25.696 [INFO][4400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2870be1e19b ContainerID="f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" Namespace="calico-system" Pod="csi-node-driver-hxqn4" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:25.757960 containerd[1466]: 2026-01-17 00:18:25.714 [INFO][4400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" Namespace="calico-system" Pod="csi-node-driver-hxqn4" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:25.757960 containerd[1466]: 2026-01-17 00:18:25.719 [INFO][4400] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" Namespace="calico-system" Pod="csi-node-driver-hxqn4" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"05d5b250-556f-4421-995f-92aeade92625", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91", Pod:"csi-node-driver-hxqn4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2870be1e19b", MAC:"da:27:b2:be:b3:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:25.757960 containerd[1466]: 2026-01-17 00:18:25.750 [INFO][4400] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91" Namespace="calico-system" Pod="csi-node-driver-hxqn4" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:25.836033 systemd-networkd[1370]: cali72ff257ff2d: Link UP Jan 17 00:18:25.839794 systemd-networkd[1370]: cali72ff257ff2d: Gained carrier Jan 
17 00:18:25.857575 containerd[1466]: time="2026-01-17T00:18:25.854070358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:25.857575 containerd[1466]: time="2026-01-17T00:18:25.855072112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:25.857575 containerd[1466]: time="2026-01-17T00:18:25.855130500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:25.857575 containerd[1466]: time="2026-01-17T00:18:25.855366888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.549 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0 calico-apiserver-6ffcc6648d- calico-apiserver 0a073190-14cb-45b8-a9bf-4fd4665cfd04 978 0 2026-01-17 00:17:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ffcc6648d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694 calico-apiserver-6ffcc6648d-969cl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali72ff257ff2d [] [] }} ContainerID="a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-969cl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.566 [INFO][4395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-969cl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.664 [INFO][4426] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" HandleID="k8s-pod-network.a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.665 [INFO][4426] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" HandleID="k8s-pod-network.a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fdf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", "pod":"calico-apiserver-6ffcc6648d-969cl", "timestamp":"2026-01-17 00:18:25.664984861 +0000 UTC"}, 
Hostname:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.666 [INFO][4426] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.685 [INFO][4426] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.685 [INFO][4426] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694' Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.716 [INFO][4426] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.742 [INFO][4426] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.757 [INFO][4426] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.770 [INFO][4426] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.778 [INFO][4426] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.779 [INFO][4426] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.784 [INFO][4426] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247 Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.796 [INFO][4426] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.814 [INFO][4426] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.69/26] block=192.168.9.64/26 handle="k8s-pod-network.a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.814 [INFO][4426] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.69/26] handle="k8s-pod-network.a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.814 [INFO][4426] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:18:25.883878 containerd[1466]: 2026-01-17 00:18:25.814 [INFO][4426] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.69/26] IPv6=[] ContainerID="a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" HandleID="k8s-pod-network.a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:25.885169 containerd[1466]: 2026-01-17 00:18:25.822 [INFO][4395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-969cl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0", GenerateName:"calico-apiserver-6ffcc6648d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a073190-14cb-45b8-a9bf-4fd4665cfd04", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffcc6648d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"", Pod:"calico-apiserver-6ffcc6648d-969cl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72ff257ff2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:25.885169 containerd[1466]: 2026-01-17 00:18:25.824 [INFO][4395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.69/32] ContainerID="a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-969cl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:25.885169 containerd[1466]: 2026-01-17 00:18:25.824 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72ff257ff2d ContainerID="a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-969cl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:25.885169 containerd[1466]: 2026-01-17 00:18:25.838 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" Namespace="calico-apiserver" 
Pod="calico-apiserver-6ffcc6648d-969cl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:25.885169 containerd[1466]: 2026-01-17 00:18:25.845 [INFO][4395] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-969cl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0", GenerateName:"calico-apiserver-6ffcc6648d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a073190-14cb-45b8-a9bf-4fd4665cfd04", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffcc6648d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247", Pod:"calico-apiserver-6ffcc6648d-969cl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72ff257ff2d", MAC:"3e:55:0c:7b:b3:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:25.885169 containerd[1466]: 2026-01-17 00:18:25.876 [INFO][4395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-969cl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:25.932500 containerd[1466]: time="2026-01-17T00:18:25.932207906Z" level=info msg="StopPodSandbox for \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\"" Jan 17 00:18:25.949024 containerd[1466]: time="2026-01-17T00:18:25.948353481Z" level=info msg="StopPodSandbox for \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\"" Jan 17 00:18:25.952873 systemd[1]: Started cri-containerd-f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91.scope - libcontainer container f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91. Jan 17 00:18:26.041121 containerd[1466]: time="2026-01-17T00:18:26.040115418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:26.042052 containerd[1466]: time="2026-01-17T00:18:26.041668547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:26.042052 containerd[1466]: time="2026-01-17T00:18:26.041737594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:26.042052 containerd[1466]: time="2026-01-17T00:18:26.041922562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:26.127215 systemd-networkd[1370]: cali2be5bae72c6: Gained IPv6LL Jan 17 00:18:26.152535 containerd[1466]: time="2026-01-17T00:18:26.151703230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hxqn4,Uid:05d5b250-556f-4421-995f-92aeade92625,Namespace:calico-system,Attempt:1,} returns sandbox id \"f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91\"" Jan 17 00:18:26.160935 containerd[1466]: time="2026-01-17T00:18:26.160854795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:18:26.162846 systemd[1]: Started cri-containerd-a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247.scope - libcontainer container a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247. Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.140 [INFO][4516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.141 [INFO][4516] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" iface="eth0" netns="/var/run/netns/cni-843512ae-4a49-92ff-5af1-c563edbaff83" Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.142 [INFO][4516] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" iface="eth0" netns="/var/run/netns/cni-843512ae-4a49-92ff-5af1-c563edbaff83" Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.143 [INFO][4516] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" iface="eth0" netns="/var/run/netns/cni-843512ae-4a49-92ff-5af1-c563edbaff83" Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.143 [INFO][4516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.144 [INFO][4516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.251 [INFO][4547] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" HandleID="k8s-pod-network.0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.251 [INFO][4547] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.251 [INFO][4547] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.282 [WARNING][4547] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" HandleID="k8s-pod-network.0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.283 [INFO][4547] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" HandleID="k8s-pod-network.0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.286 [INFO][4547] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:26.300312 containerd[1466]: 2026-01-17 00:18:26.293 [INFO][4516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:26.303208 containerd[1466]: time="2026-01-17T00:18:26.300667035Z" level=info msg="TearDown network for sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\" successfully" Jan 17 00:18:26.303208 containerd[1466]: time="2026-01-17T00:18:26.300712308Z" level=info msg="StopPodSandbox for \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\" returns successfully" Jan 17 00:18:26.304106 containerd[1466]: time="2026-01-17T00:18:26.304046192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffcc6648d-2jknj,Uid:3475ce39-a584-4708-980c-68f68b25eff1,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:18:26.350576 systemd[1]: run-netns-cni\x2d843512ae\x2d4a49\x2d92ff\x2d5af1\x2dc563edbaff83.mount: Deactivated successfully. 
Jan 17 00:18:26.372064 containerd[1466]: time="2026-01-17T00:18:26.371433256Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:26.382308 containerd[1466]: time="2026-01-17T00:18:26.382127416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:18:26.383292 containerd[1466]: time="2026-01-17T00:18:26.382436235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:18:26.385498 kubelet[2619]: E0117 00:18:26.384037 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:18:26.385498 kubelet[2619]: E0117 00:18:26.384117 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:18:26.385498 kubelet[2619]: E0117 00:18:26.384309 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-splcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxqn4_calico-system(05d5b250-556f-4421-995f-92aeade92625): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:26.395515 containerd[1466]: time="2026-01-17T00:18:26.393144594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:18:26.429823 containerd[1466]: time="2026-01-17T00:18:26.429009581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffcc6648d-969cl,Uid:0a073190-14cb-45b8-a9bf-4fd4665cfd04,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247\"" Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.256 [INFO][4515] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.256 [INFO][4515] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" iface="eth0" netns="/var/run/netns/cni-b872640f-e8af-40ed-69f7-c428b27d7634" Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.259 [INFO][4515] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" iface="eth0" netns="/var/run/netns/cni-b872640f-e8af-40ed-69f7-c428b27d7634" Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.260 [INFO][4515] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" iface="eth0" netns="/var/run/netns/cni-b872640f-e8af-40ed-69f7-c428b27d7634" Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.260 [INFO][4515] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.260 [INFO][4515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.342 [INFO][4563] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" HandleID="k8s-pod-network.d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.343 [INFO][4563] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.343 [INFO][4563] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.365 [WARNING][4563] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" HandleID="k8s-pod-network.d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.368 [INFO][4563] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" HandleID="k8s-pod-network.d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.379 [INFO][4563] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:26.442270 containerd[1466]: 2026-01-17 00:18:26.426 [INFO][4515] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:26.449398 containerd[1466]: time="2026-01-17T00:18:26.443787708Z" level=info msg="TearDown network for sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\" successfully" Jan 17 00:18:26.449398 containerd[1466]: time="2026-01-17T00:18:26.443844439Z" level=info msg="StopPodSandbox for \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\" returns successfully" Jan 17 00:18:26.453008 systemd[1]: run-netns-cni\x2db872640f\x2de8af\x2d40ed\x2d69f7\x2dc428b27d7634.mount: Deactivated successfully. Jan 17 00:18:26.458006 kubelet[2619]: I0117 00:18:26.456966 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:18:26.461364 containerd[1466]: time="2026-01-17T00:18:26.461285806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-sdfcr,Uid:fe061b2a-805b-43bd-8451-203c834c880a,Namespace:calico-system,Attempt:1,}" Jan 17 00:18:26.577616 kubelet[2619]: E0117 00:18:26.577516 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" podUID="c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e" Jan 17 00:18:26.639176 containerd[1466]: time="2026-01-17T00:18:26.638065455Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:26.649923 containerd[1466]: time="2026-01-17T00:18:26.649251387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:18:26.652609 containerd[1466]: time="2026-01-17T00:18:26.651140061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:18:26.658322 kubelet[2619]: E0117 00:18:26.657164 2619 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:18:26.658322 kubelet[2619]: E0117 00:18:26.657254 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:18:26.658322 kubelet[2619]: E0117 00:18:26.657685 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-splcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxqn4_calico-system(05d5b250-556f-4421-995f-92aeade92625): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:26.660964 kubelet[2619]: E0117 00:18:26.659406 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625" Jan 17 00:18:26.661245 containerd[1466]: time="2026-01-17T00:18:26.660142216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:18:26.862360 containerd[1466]: time="2026-01-17T00:18:26.861342369Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:26.863774 systemd-networkd[1370]: cali5950372b25f: Link UP Jan 17 00:18:26.867262 containerd[1466]: time="2026-01-17T00:18:26.867033897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:18:26.867233 systemd-networkd[1370]: cali5950372b25f: Gained carrier Jan 17 00:18:26.868798 containerd[1466]: time="2026-01-17T00:18:26.867224470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:18:26.871480 kubelet[2619]: E0117 00:18:26.869254 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:18:26.871480 kubelet[2619]: E0117 00:18:26.869331 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:18:26.871480 kubelet[2619]: E0117 00:18:26.869559 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gshjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ffcc6648d-969cl_calico-apiserver(0a073190-14cb-45b8-a9bf-4fd4665cfd04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:26.872727 kubelet[2619]: E0117 00:18:26.872650 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" podUID="0a073190-14cb-45b8-a9bf-4fd4665cfd04" Jan 17 00:18:26.895834 systemd-networkd[1370]: cali72ff257ff2d: Gained IPv6LL Jan 17 00:18:26.915781 containerd[1466]: time="2026-01-17T00:18:26.914832392Z" level=info msg="StopPodSandbox for \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\"" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.543 [INFO][4571] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0 calico-apiserver-6ffcc6648d- calico-apiserver 3475ce39-a584-4708-980c-68f68b25eff1 995 0 2026-01-17 00:17:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:6ffcc6648d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694 calico-apiserver-6ffcc6648d-2jknj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5950372b25f [] [] }} ContainerID="6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-2jknj" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.544 [INFO][4571] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-2jknj" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.725 [INFO][4617] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" HandleID="k8s-pod-network.6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.725 [INFO][4617] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" HandleID="k8s-pod-network.6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000392270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", "pod":"calico-apiserver-6ffcc6648d-2jknj", "timestamp":"2026-01-17 00:18:26.725290969 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.725 [INFO][4617] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.725 [INFO][4617] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.725 [INFO][4617] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694' Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.750 [INFO][4617] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.769 [INFO][4617] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.793 [INFO][4617] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.800 [INFO][4617] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.808 [INFO][4617] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.808 [INFO][4617] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.812 [INFO][4617] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350 Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.829 [INFO][4617] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.847 [INFO][4617] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.70/26] block=192.168.9.64/26 handle="k8s-pod-network.6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.847 [INFO][4617] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.70/26] handle="k8s-pod-network.6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.847 [INFO][4617] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
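[Editor's note] Note how every IPAM operation in this journal brackets its work with "About to acquire host-wide IPAM lock" / "Acquired" / "Released": assignment and release for all pods on the node are serialized through a single lock before any block is read or written, which is why the three CNI ADDs above (csi-node-driver-hxqn4, then the two calico-apiserver pods) complete back-to-back rather than interleaved. A toy Go illustration of that pattern follows; the handle names and offsets are invented, only the locking discipline mirrors the log.

// Toy illustration of the host-wide IPAM lock pattern in the log above.
package main

import (
    "fmt"
    "sync"
)

type hostIPAM struct {
    mu      sync.Mutex // the "host-wide IPAM lock" in the log lines
    nextBit int        // next free offset inside the block; illustrative only
}

func (h *hostIPAM) assign(handle string) int {
    h.mu.Lock()         // "Acquired host-wide IPAM lock."
    defer h.mu.Unlock() // "Released host-wide IPAM lock."
    bit := h.nextBit
    h.nextBit++
    fmt.Printf("handle %s -> block offset %d\n", handle, bit)
    return bit
}

func main() {
    ipam := &hostIPAM{}
    var wg sync.WaitGroup
    // Hypothetical concurrent CNI ADDs; the lock forces one-at-a-time assignment.
    for _, h := range []string{"pod-a", "pod-b", "pod-c"} {
        wg.Add(1)
        go func(handle string) {
            defer wg.Done()
            ipam.assign(handle)
        }(h)
    }
    wg.Wait()
}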
Jan 17 00:18:26.919716 containerd[1466]: 2026-01-17 00:18:26.847 [INFO][4617] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.70/26] IPv6=[] ContainerID="6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" HandleID="k8s-pod-network.6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:26.923899 containerd[1466]: 2026-01-17 00:18:26.853 [INFO][4571] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-2jknj" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0", GenerateName:"calico-apiserver-6ffcc6648d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3475ce39-a584-4708-980c-68f68b25eff1", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffcc6648d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"", Pod:"calico-apiserver-6ffcc6648d-2jknj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5950372b25f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:26.923899 containerd[1466]: 2026-01-17 00:18:26.854 [INFO][4571] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.70/32] ContainerID="6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-2jknj" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:26.923899 containerd[1466]: 2026-01-17 00:18:26.854 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5950372b25f ContainerID="6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-2jknj" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:26.923899 containerd[1466]: 2026-01-17 00:18:26.870 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" Namespace="calico-apiserver" 
Pod="calico-apiserver-6ffcc6648d-2jknj" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:26.923899 containerd[1466]: 2026-01-17 00:18:26.872 [INFO][4571] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-2jknj" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0", GenerateName:"calico-apiserver-6ffcc6648d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3475ce39-a584-4708-980c-68f68b25eff1", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffcc6648d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350", Pod:"calico-apiserver-6ffcc6648d-2jknj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5950372b25f", MAC:"56:fe:89:9d:26:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:26.923899 containerd[1466]: 2026-01-17 00:18:26.911 [INFO][4571] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350" Namespace="calico-apiserver" Pod="calico-apiserver-6ffcc6648d-2jknj" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:26.960102 systemd-networkd[1370]: cali2870be1e19b: Gained IPv6LL Jan 17 00:18:27.075835 containerd[1466]: time="2026-01-17T00:18:27.074270029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:27.075835 containerd[1466]: time="2026-01-17T00:18:27.074371289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:27.077681 systemd-networkd[1370]: cali0a5e7c5cbb0: Link UP Jan 17 00:18:27.081835 containerd[1466]: time="2026-01-17T00:18:27.081631152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:27.085320 systemd-networkd[1370]: cali0a5e7c5cbb0: Gained carrier Jan 17 00:18:27.087017 containerd[1466]: time="2026-01-17T00:18:27.084541088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.741 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0 goldmane-666569f655- calico-system fe061b2a-805b-43bd-8451-203c834c880a 997 0 2026-01-17 00:17:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694 goldmane-666569f655-sdfcr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0a5e7c5cbb0 [] [] }} ContainerID="59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" Namespace="calico-system" Pod="goldmane-666569f655-sdfcr" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.742 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" Namespace="calico-system" Pod="goldmane-666569f655-sdfcr" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.841 [INFO][4629] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" HandleID="k8s-pod-network.59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.842 [INFO][4629] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" HandleID="k8s-pod-network.59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", "pod":"goldmane-666569f655-sdfcr", "timestamp":"2026-01-17 00:18:26.841745606 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.842 [INFO][4629] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.847 [INFO][4629] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.847 [INFO][4629] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694' Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.907 [INFO][4629] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.942 [INFO][4629] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.969 [INFO][4629] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.978 [INFO][4629] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.993 [INFO][4629] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:26.993 [INFO][4629] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:27.000 [INFO][4629] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:27.014 [INFO][4629] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:27.036 [INFO][4629] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.71/26] block=192.168.9.64/26 handle="k8s-pod-network.59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:27.037 [INFO][4629] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.71/26] handle="k8s-pod-network.59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:27.037 [INFO][4629] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:18:27.147390 containerd[1466]: 2026-01-17 00:18:27.038 [INFO][4629] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.71/26] IPv6=[] ContainerID="59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" HandleID="k8s-pod-network.59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:27.151426 containerd[1466]: 2026-01-17 00:18:27.051 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" Namespace="calico-system" Pod="goldmane-666569f655-sdfcr" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fe061b2a-805b-43bd-8451-203c834c880a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"", Pod:"goldmane-666569f655-sdfcr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0a5e7c5cbb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:27.151426 containerd[1466]: 2026-01-17 00:18:27.051 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.71/32] ContainerID="59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" Namespace="calico-system" Pod="goldmane-666569f655-sdfcr" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:27.151426 containerd[1466]: 2026-01-17 00:18:27.052 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a5e7c5cbb0 ContainerID="59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" Namespace="calico-system" Pod="goldmane-666569f655-sdfcr" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:27.151426 containerd[1466]: 2026-01-17 00:18:27.090 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" Namespace="calico-system" Pod="goldmane-666569f655-sdfcr" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:27.151426 
containerd[1466]: 2026-01-17 00:18:27.095 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" Namespace="calico-system" Pod="goldmane-666569f655-sdfcr" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fe061b2a-805b-43bd-8451-203c834c880a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c", Pod:"goldmane-666569f655-sdfcr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0a5e7c5cbb0", MAC:"fe:7f:04:f0:c8:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:27.151426 containerd[1466]: 2026-01-17 00:18:27.138 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c" Namespace="calico-system" Pod="goldmane-666569f655-sdfcr" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:27.180153 systemd[1]: Started cri-containerd-6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350.scope - libcontainer container 6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350. Jan 17 00:18:27.286018 containerd[1466]: time="2026-01-17T00:18:27.283148956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:27.286018 containerd[1466]: time="2026-01-17T00:18:27.283255534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:27.286018 containerd[1466]: time="2026-01-17T00:18:27.283285730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:27.286018 containerd[1466]: time="2026-01-17T00:18:27.283544375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:27.379638 systemd[1]: Started cri-containerd-59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c.scope - libcontainer container 59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c. Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.247 [INFO][4656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.248 [INFO][4656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" iface="eth0" netns="/var/run/netns/cni-a55e6092-1fef-2b65-11f5-882d140ce7ba" Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.249 [INFO][4656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" iface="eth0" netns="/var/run/netns/cni-a55e6092-1fef-2b65-11f5-882d140ce7ba" Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.249 [INFO][4656] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" iface="eth0" netns="/var/run/netns/cni-a55e6092-1fef-2b65-11f5-882d140ce7ba" Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.250 [INFO][4656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.250 [INFO][4656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.379 [INFO][4710] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" HandleID="k8s-pod-network.cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.388 [INFO][4710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.390 [INFO][4710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.416 [WARNING][4710] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" HandleID="k8s-pod-network.cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.416 [INFO][4710] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" HandleID="k8s-pod-network.cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.420 [INFO][4710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:18:27.426860 containerd[1466]: 2026-01-17 00:18:27.423 [INFO][4656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:27.430715 containerd[1466]: time="2026-01-17T00:18:27.430620737Z" level=info msg="TearDown network for sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\" successfully" Jan 17 00:18:27.430989 containerd[1466]: time="2026-01-17T00:18:27.430925912Z" level=info msg="StopPodSandbox for \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\" returns successfully" Jan 17 00:18:27.436745 containerd[1466]: time="2026-01-17T00:18:27.435261517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kt8n7,Uid:0b123c30-c4a1-486c-a4a2-f586dab5927b,Namespace:kube-system,Attempt:1,}" Jan 17 00:18:27.438412 systemd[1]: run-netns-cni\x2da55e6092\x2d1fef\x2d2b65\x2d11f5\x2d882d140ce7ba.mount: Deactivated successfully. Jan 17 00:18:27.561780 kubelet[2619]: E0117 00:18:27.561444 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625" Jan 17 00:18:27.566061 kubelet[2619]: E0117 00:18:27.565895 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" podUID="0a073190-14cb-45b8-a9bf-4fd4665cfd04" Jan 17 00:18:27.893657 containerd[1466]: time="2026-01-17T00:18:27.893390549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffcc6648d-2jknj,Uid:3475ce39-a584-4708-980c-68f68b25eff1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350\"" Jan 17 00:18:27.900510 containerd[1466]: time="2026-01-17T00:18:27.900272939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:18:27.923710 systemd-networkd[1370]: cali5064202bf06: Link UP Jan 17 00:18:27.925318 systemd-networkd[1370]: cali5064202bf06: Gained carrier Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.698 [INFO][4745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0 coredns-668d6bf9bc- kube-system 0b123c30-c4a1-486c-a4a2-f586dab5927b 1022 0 2026-01-17 00:17:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694 coredns-668d6bf9bc-kt8n7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5064202bf06 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-kt8n7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.700 [INFO][4745] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-kt8n7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.785 [INFO][4773] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" HandleID="k8s-pod-network.afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.786 [INFO][4773] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" HandleID="k8s-pod-network.afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039d6a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", "pod":"coredns-668d6bf9bc-kt8n7", "timestamp":"2026-01-17 00:18:27.785477004 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.786 [INFO][4773] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.786 [INFO][4773] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.786 [INFO][4773] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694' Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.798 [INFO][4773] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.808 [INFO][4773] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.818 [INFO][4773] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.827 [INFO][4773] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.837 [INFO][4773] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.837 [INFO][4773] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.848 [INFO][4773] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.867 [INFO][4773] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.902 [INFO][4773] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.72/26] block=192.168.9.64/26 handle="k8s-pod-network.afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.902 [INFO][4773] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.72/26] handle="k8s-pod-network.afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" host="ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694" Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.902 [INFO][4773] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:18:27.958496 containerd[1466]: 2026-01-17 00:18:27.902 [INFO][4773] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.72/26] IPv6=[] ContainerID="afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" HandleID="k8s-pod-network.afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:27.960305 containerd[1466]: 2026-01-17 00:18:27.911 [INFO][4745] cni-plugin/k8s.go 418: Populated endpoint ContainerID="afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-kt8n7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0b123c30-c4a1-486c-a4a2-f586dab5927b", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"", Pod:"coredns-668d6bf9bc-kt8n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5064202bf06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:27.960305 containerd[1466]: 2026-01-17 00:18:27.911 [INFO][4745] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.72/32] ContainerID="afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-kt8n7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:27.960305 containerd[1466]: 2026-01-17 00:18:27.911 [INFO][4745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5064202bf06 ContainerID="afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-kt8n7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:27.960305 containerd[1466]: 2026-01-17 00:18:27.922 
[INFO][4745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-kt8n7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:27.960305 containerd[1466]: 2026-01-17 00:18:27.923 [INFO][4745] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-kt8n7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0b123c30-c4a1-486c-a4a2-f586dab5927b", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa", Pod:"coredns-668d6bf9bc-kt8n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5064202bf06", MAC:"16:1c:20:5c:98:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:27.960305 containerd[1466]: 2026-01-17 00:18:27.953 [INFO][4745] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-kt8n7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:28.034418 containerd[1466]: time="2026-01-17T00:18:28.033566127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:28.034418 containerd[1466]: time="2026-01-17T00:18:28.033655549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:28.034418 containerd[1466]: time="2026-01-17T00:18:28.033702159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:28.034418 containerd[1466]: time="2026-01-17T00:18:28.033854700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:28.068719 containerd[1466]: time="2026-01-17T00:18:28.068522345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-sdfcr,Uid:fe061b2a-805b-43bd-8451-203c834c880a,Namespace:calico-system,Attempt:1,} returns sandbox id \"59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c\"" Jan 17 00:18:28.105069 systemd[1]: Started cri-containerd-afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa.scope - libcontainer container afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa. Jan 17 00:18:28.110830 containerd[1466]: time="2026-01-17T00:18:28.108475280Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:28.111876 systemd-networkd[1370]: cali5950372b25f: Gained IPv6LL Jan 17 00:18:28.115569 containerd[1466]: time="2026-01-17T00:18:28.115076390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:18:28.115569 containerd[1466]: time="2026-01-17T00:18:28.115232704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:18:28.115949 kubelet[2619]: E0117 00:18:28.115533 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:18:28.115949 kubelet[2619]: E0117 00:18:28.115669 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:18:28.116185 kubelet[2619]: E0117 00:18:28.116001 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-828hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ffcc6648d-2jknj_calico-apiserver(3475ce39-a584-4708-980c-68f68b25eff1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:28.118277 kubelet[2619]: E0117 00:18:28.118170 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" podUID="3475ce39-a584-4708-980c-68f68b25eff1" Jan 17 00:18:28.119597 containerd[1466]: time="2026-01-17T00:18:28.118562885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:18:28.245934 containerd[1466]: time="2026-01-17T00:18:28.245796858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kt8n7,Uid:0b123c30-c4a1-486c-a4a2-f586dab5927b,Namespace:kube-system,Attempt:1,} returns sandbox id \"afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa\"" Jan 17 00:18:28.253328 containerd[1466]: time="2026-01-17T00:18:28.253249045Z" level=info msg="CreateContainer within sandbox \"afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:18:28.285181 containerd[1466]: 
time="2026-01-17T00:18:28.284818287Z" level=info msg="CreateContainer within sandbox \"afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ea6faea7d07423c4f1bca1b0385377fa88d716b49470735e42bf65811209233\"" Jan 17 00:18:28.286741 containerd[1466]: time="2026-01-17T00:18:28.286674712Z" level=info msg="StartContainer for \"6ea6faea7d07423c4f1bca1b0385377fa88d716b49470735e42bf65811209233\"" Jan 17 00:18:28.301635 containerd[1466]: time="2026-01-17T00:18:28.301502847Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:28.303913 systemd-networkd[1370]: cali0a5e7c5cbb0: Gained IPv6LL Jan 17 00:18:28.305725 kubelet[2619]: E0117 00:18:28.304603 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:18:28.305725 kubelet[2619]: E0117 00:18:28.304670 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:18:28.305725 kubelet[2619]: E0117 00:18:28.304875 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tm59t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-sdfcr_calico-system(fe061b2a-805b-43bd-8451-203c834c880a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:28.306087 containerd[1466]: time="2026-01-17T00:18:28.304178828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:18:28.306087 containerd[1466]: time="2026-01-17T00:18:28.304332058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:18:28.307394 kubelet[2619]: E0117 00:18:28.306624 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sdfcr" podUID="fe061b2a-805b-43bd-8451-203c834c880a" Jan 17 00:18:28.380845 systemd[1]: Started cri-containerd-6ea6faea7d07423c4f1bca1b0385377fa88d716b49470735e42bf65811209233.scope - libcontainer container 6ea6faea7d07423c4f1bca1b0385377fa88d716b49470735e42bf65811209233. 
Jan 17 00:18:28.466143 containerd[1466]: time="2026-01-17T00:18:28.466051530Z" level=info msg="StartContainer for \"6ea6faea7d07423c4f1bca1b0385377fa88d716b49470735e42bf65811209233\" returns successfully" Jan 17 00:18:28.566635 kubelet[2619]: E0117 00:18:28.564655 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sdfcr" podUID="fe061b2a-805b-43bd-8451-203c834c880a" Jan 17 00:18:28.572042 kubelet[2619]: E0117 00:18:28.571971 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" podUID="3475ce39-a584-4708-980c-68f68b25eff1" Jan 17 00:18:28.663581 kubelet[2619]: I0117 00:18:28.663433 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kt8n7" podStartSLOduration=49.663402312 podStartE2EDuration="49.663402312s" podCreationTimestamp="2026-01-17 00:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:18:28.633917777 +0000 UTC m=+55.055399363" watchObservedRunningTime="2026-01-17 00:18:28.663402312 +0000 UTC m=+55.084883883" Jan 17 00:18:29.263705 systemd-networkd[1370]: cali5064202bf06: Gained IPv6LL Jan 17 00:18:29.583346 kubelet[2619]: E0117 00:18:29.582924 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sdfcr" podUID="fe061b2a-805b-43bd-8451-203c834c880a" Jan 17 00:18:29.584415 kubelet[2619]: E0117 00:18:29.584338 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" podUID="3475ce39-a584-4708-980c-68f68b25eff1"
Jan 17 00:18:31.754735 ntpd[1428]: Listen normally on 7 vxlan.calico 192.168.9.64:123
Jan 17 00:18:31.754959 ntpd[1428]: Listen normally on 8 cali708e85a058c [fe80::ecee:eeff:feee:eeee%4]:123
Jan 17 00:18:31.755100 ntpd[1428]: Listen normally on 9 vxlan.calico [fe80::64eb:e9ff:fe5a:2b17%5]:123
Jan 17 00:18:31.755165 ntpd[1428]: Listen normally on 10 calia01385edea1 [fe80::ecee:eeff:feee:eeee%8]:123
Jan 17 00:18:31.755232 ntpd[1428]: Listen normally on 11 cali2be5bae72c6 [fe80::ecee:eeff:feee:eeee%9]:123
Jan 17 00:18:31.755293 ntpd[1428]: Listen normally on 12 cali2870be1e19b [fe80::ecee:eeff:feee:eeee%10]:123
Jan 17 00:18:31.755349 ntpd[1428]: Listen normally on 13 cali72ff257ff2d [fe80::ecee:eeff:feee:eeee%11]:123
Jan 17 00:18:31.755420 ntpd[1428]: Listen normally on 14 cali5950372b25f [fe80::ecee:eeff:feee:eeee%12]:123
Jan 17 00:18:31.755507 ntpd[1428]: Listen normally on 15 cali0a5e7c5cbb0 [fe80::ecee:eeff:feee:eeee%13]:123
Jan 17 00:18:31.755567 ntpd[1428]: Listen normally on 16 cali5064202bf06 [fe80::ecee:eeff:feee:eeee%14]:123
Jan 17 00:18:33.898953 containerd[1466]: time="2026-01-17T00:18:33.898881304Z" level=info msg="StopPodSandbox for \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\"" Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.007 [WARNING][4915] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--5dfbff4764--c2gj6-eth0" Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.008 [INFO][4915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.008 [INFO][4915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" iface="eth0" netns="" Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.008 [INFO][4915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.008 [INFO][4915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.102 [INFO][4926] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" HandleID="k8s-pod-network.8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--5dfbff4764--c2gj6-eth0" Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.103 [INFO][4926] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.104 [INFO][4926] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.119 [WARNING][4926] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" HandleID="k8s-pod-network.8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--5dfbff4764--c2gj6-eth0" Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.119 [INFO][4926] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" HandleID="k8s-pod-network.8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--5dfbff4764--c2gj6-eth0" Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.122 [INFO][4926] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:34.129925 containerd[1466]: 2026-01-17 00:18:34.126 [INFO][4915] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:34.131291 containerd[1466]: time="2026-01-17T00:18:34.130023549Z" level=info msg="TearDown network for sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\" successfully" Jan 17 00:18:34.131291 containerd[1466]: time="2026-01-17T00:18:34.130162547Z" level=info msg="StopPodSandbox for \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\" returns successfully" Jan 17 00:18:34.133708 containerd[1466]: time="2026-01-17T00:18:34.133620758Z" level=info msg="RemovePodSandbox for \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\"" Jan 17 00:18:34.133708 containerd[1466]: time="2026-01-17T00:18:34.133808313Z" level=info msg="Forcibly stopping sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\"" Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.254 [WARNING][4942] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--5dfbff4764--c2gj6-eth0" Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.254 [INFO][4942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.254 [INFO][4942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" iface="eth0" netns="" Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.254 [INFO][4942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.255 [INFO][4942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.306 [INFO][4952] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" HandleID="k8s-pod-network.8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--5dfbff4764--c2gj6-eth0" Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.306 [INFO][4952] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.306 [INFO][4952] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.332 [WARNING][4952] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" HandleID="k8s-pod-network.8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--5dfbff4764--c2gj6-eth0" Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.332 [INFO][4952] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" HandleID="k8s-pod-network.8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-whisker--5dfbff4764--c2gj6-eth0" Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.341 [INFO][4952] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:34.347243 containerd[1466]: 2026-01-17 00:18:34.343 [INFO][4942] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b" Jan 17 00:18:34.348064 containerd[1466]: time="2026-01-17T00:18:34.347338315Z" level=info msg="TearDown network for sandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\" successfully" Jan 17 00:18:34.357152 containerd[1466]: time="2026-01-17T00:18:34.356732728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:18:34.357152 containerd[1466]: time="2026-01-17T00:18:34.356851572Z" level=info msg="RemovePodSandbox \"8a49afe1e6e143cf1e208e33db427338edc6999ea1dd89c519911dfa9860059b\" returns successfully" Jan 17 00:18:34.358016 containerd[1466]: time="2026-01-17T00:18:34.357879978Z" level=info msg="StopPodSandbox for \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\"" Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.462 [WARNING][4967] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"05d5b250-556f-4421-995f-92aeade92625", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91", Pod:"csi-node-driver-hxqn4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2870be1e19b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.463 [INFO][4967] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.465 [INFO][4967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" iface="eth0" netns="" Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.465 [INFO][4967] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.465 [INFO][4967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.571 [INFO][4974] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" HandleID="k8s-pod-network.87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.572 [INFO][4974] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.573 [INFO][4974] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.607 [WARNING][4974] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" HandleID="k8s-pod-network.87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.608 [INFO][4974] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" HandleID="k8s-pod-network.87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.634 [INFO][4974] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:34.643751 containerd[1466]: 2026-01-17 00:18:34.638 [INFO][4967] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:34.643751 containerd[1466]: time="2026-01-17T00:18:34.643688895Z" level=info msg="TearDown network for sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\" successfully" Jan 17 00:18:34.643751 containerd[1466]: time="2026-01-17T00:18:34.643729608Z" level=info msg="StopPodSandbox for \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\" returns successfully" Jan 17 00:18:34.646612 containerd[1466]: time="2026-01-17T00:18:34.646569896Z" level=info msg="RemovePodSandbox for \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\"" Jan 17 00:18:34.646702 containerd[1466]: time="2026-01-17T00:18:34.646625189Z" level=info msg="Forcibly stopping sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\"" Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.779 [WARNING][4990] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"05d5b250-556f-4421-995f-92aeade92625", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"f7e76f290b8789404fabd63c80c514e8c8927619b2117e6bb5430712ef7bba91", Pod:"csi-node-driver-hxqn4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2870be1e19b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.781 [INFO][4990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.781 [INFO][4990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" iface="eth0" netns="" Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.781 [INFO][4990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.781 [INFO][4990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.856 [INFO][4997] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" HandleID="k8s-pod-network.87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.856 [INFO][4997] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.856 [INFO][4997] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.887 [WARNING][4997] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" HandleID="k8s-pod-network.87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.887 [INFO][4997] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" HandleID="k8s-pod-network.87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-csi--node--driver--hxqn4-eth0" Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.896 [INFO][4997] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:34.903768 containerd[1466]: 2026-01-17 00:18:34.900 [INFO][4990] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400" Jan 17 00:18:34.903768 containerd[1466]: time="2026-01-17T00:18:34.903726022Z" level=info msg="TearDown network for sandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\" successfully" Jan 17 00:18:34.912766 containerd[1466]: time="2026-01-17T00:18:34.912677714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:18:34.913001 containerd[1466]: time="2026-01-17T00:18:34.912807751Z" level=info msg="RemovePodSandbox \"87e5c510f49118b65e5b447a14a6084357143064057565ed898d7b62be213400\" returns successfully" Jan 17 00:18:34.915347 containerd[1466]: time="2026-01-17T00:18:34.913745120Z" level=info msg="StopPodSandbox for \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\"" Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:34.992 [WARNING][5011] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fe061b2a-805b-43bd-8451-203c834c880a", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c", Pod:"goldmane-666569f655-sdfcr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0a5e7c5cbb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:34.992 [INFO][5011] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:34.993 [INFO][5011] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" iface="eth0" netns="" Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:34.993 [INFO][5011] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:34.993 [INFO][5011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:35.039 [INFO][5018] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" HandleID="k8s-pod-network.d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:35.039 [INFO][5018] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:35.039 [INFO][5018] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:35.052 [WARNING][5018] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" HandleID="k8s-pod-network.d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:35.052 [INFO][5018] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" HandleID="k8s-pod-network.d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:35.058 [INFO][5018] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:35.063877 containerd[1466]: 2026-01-17 00:18:35.061 [INFO][5011] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:35.066095 containerd[1466]: time="2026-01-17T00:18:35.064427679Z" level=info msg="TearDown network for sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\" successfully" Jan 17 00:18:35.066095 containerd[1466]: time="2026-01-17T00:18:35.064554661Z" level=info msg="StopPodSandbox for \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\" returns successfully" Jan 17 00:18:35.067119 containerd[1466]: time="2026-01-17T00:18:35.065443880Z" level=info msg="RemovePodSandbox for \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\"" Jan 17 00:18:35.067119 containerd[1466]: time="2026-01-17T00:18:35.066289698Z" level=info msg="Forcibly stopping sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\"" Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.154 [WARNING][5032] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fe061b2a-805b-43bd-8451-203c834c880a", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"59997912b2e93a17e817ffd1ebade8b7a6e3e63a7b9120e4945119bf9062d77c", Pod:"goldmane-666569f655-sdfcr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0a5e7c5cbb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.155 [INFO][5032] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.155 [INFO][5032] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" iface="eth0" netns="" Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.155 [INFO][5032] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.155 [INFO][5032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.210 [INFO][5039] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" HandleID="k8s-pod-network.d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.211 [INFO][5039] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.212 [INFO][5039] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.222 [WARNING][5039] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" HandleID="k8s-pod-network.d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.222 [INFO][5039] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" HandleID="k8s-pod-network.d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-goldmane--666569f655--sdfcr-eth0" Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.231 [INFO][5039] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:35.237661 containerd[1466]: 2026-01-17 00:18:35.234 [INFO][5032] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189" Jan 17 00:18:35.238694 containerd[1466]: time="2026-01-17T00:18:35.237748790Z" level=info msg="TearDown network for sandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\" successfully" Jan 17 00:18:35.244392 containerd[1466]: time="2026-01-17T00:18:35.244298463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:18:35.245620 containerd[1466]: time="2026-01-17T00:18:35.244433694Z" level=info msg="RemovePodSandbox \"d75d9ca12237709d0413cca10f586c13ccc3dacfc9eb9dbaec30c4f5f2dba189\" returns successfully" Jan 17 00:18:35.245620 containerd[1466]: time="2026-01-17T00:18:35.245519436Z" level=info msg="StopPodSandbox for \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\"" Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.361 [WARNING][5053] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0", GenerateName:"calico-apiserver-6ffcc6648d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3475ce39-a584-4708-980c-68f68b25eff1", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffcc6648d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350", Pod:"calico-apiserver-6ffcc6648d-2jknj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5950372b25f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.362 [INFO][5053] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.362 [INFO][5053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" iface="eth0" netns="" Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.362 [INFO][5053] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.362 [INFO][5053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.421 [INFO][5060] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" HandleID="k8s-pod-network.0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.422 [INFO][5060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.422 [INFO][5060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.441 [WARNING][5060] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" HandleID="k8s-pod-network.0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.441 [INFO][5060] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" HandleID="k8s-pod-network.0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.446 [INFO][5060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:35.454384 containerd[1466]: 2026-01-17 00:18:35.450 [INFO][5053] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:35.455588 containerd[1466]: time="2026-01-17T00:18:35.454558953Z" level=info msg="TearDown network for sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\" successfully" Jan 17 00:18:35.455588 containerd[1466]: time="2026-01-17T00:18:35.454606200Z" level=info msg="StopPodSandbox for \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\" returns successfully" Jan 17 00:18:35.457167 containerd[1466]: time="2026-01-17T00:18:35.457092680Z" level=info msg="RemovePodSandbox for \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\"" Jan 17 00:18:35.457167 containerd[1466]: time="2026-01-17T00:18:35.457162813Z" level=info msg="Forcibly stopping sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\"" Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.546 [WARNING][5074] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0", GenerateName:"calico-apiserver-6ffcc6648d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3475ce39-a584-4708-980c-68f68b25eff1", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffcc6648d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"6d033673ae0e75ea1266528925c9bd789cbe25be6bc019b9145fff7402a00350", Pod:"calico-apiserver-6ffcc6648d-2jknj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5950372b25f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.547 [INFO][5074] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.547 [INFO][5074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" iface="eth0" netns="" Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.547 [INFO][5074] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.547 [INFO][5074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.594 [INFO][5082] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" HandleID="k8s-pod-network.0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.595 [INFO][5082] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.595 [INFO][5082] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.619 [WARNING][5082] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" HandleID="k8s-pod-network.0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.620 [INFO][5082] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" HandleID="k8s-pod-network.0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--2jknj-eth0" Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.635 [INFO][5082] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:35.646731 containerd[1466]: 2026-01-17 00:18:35.640 [INFO][5074] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a" Jan 17 00:18:35.646731 containerd[1466]: time="2026-01-17T00:18:35.646206920Z" level=info msg="TearDown network for sandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\" successfully" Jan 17 00:18:35.659290 containerd[1466]: time="2026-01-17T00:18:35.659186259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:18:35.659657 containerd[1466]: time="2026-01-17T00:18:35.659316701Z" level=info msg="RemovePodSandbox \"0cf366e89d5eb62cd9f3218ebe2a8d58b7a1831316822b41e4ac380f1d4e829a\" returns successfully" Jan 17 00:18:35.661323 containerd[1466]: time="2026-01-17T00:18:35.660851369Z" level=info msg="StopPodSandbox for \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\"" Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.743 [WARNING][5096] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0", GenerateName:"calico-apiserver-6ffcc6648d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a073190-14cb-45b8-a9bf-4fd4665cfd04", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffcc6648d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247", Pod:"calico-apiserver-6ffcc6648d-969cl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72ff257ff2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.745 [INFO][5096] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.745 [INFO][5096] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" iface="eth0" netns="" Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.745 [INFO][5096] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.745 [INFO][5096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.808 [INFO][5103] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" HandleID="k8s-pod-network.ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.809 [INFO][5103] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.809 [INFO][5103] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.824 [WARNING][5103] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" HandleID="k8s-pod-network.ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.824 [INFO][5103] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" HandleID="k8s-pod-network.ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.826 [INFO][5103] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:35.832268 containerd[1466]: 2026-01-17 00:18:35.829 [INFO][5096] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:35.836528 containerd[1466]: time="2026-01-17T00:18:35.833482062Z" level=info msg="TearDown network for sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\" successfully" Jan 17 00:18:35.836528 containerd[1466]: time="2026-01-17T00:18:35.833541365Z" level=info msg="StopPodSandbox for \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\" returns successfully" Jan 17 00:18:35.836528 containerd[1466]: time="2026-01-17T00:18:35.836297968Z" level=info msg="RemovePodSandbox for \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\"" Jan 17 00:18:35.836528 containerd[1466]: time="2026-01-17T00:18:35.836351627Z" level=info msg="Forcibly stopping sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\"" Jan 17 00:18:35.919320 containerd[1466]: time="2026-01-17T00:18:35.919240228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:35.926 [WARNING][5117] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0", GenerateName:"calico-apiserver-6ffcc6648d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a073190-14cb-45b8-a9bf-4fd4665cfd04", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffcc6648d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"a64e7b1fe78bc0e116391271209b4d3afa1798c16b868d1589bc3237c4e4e247", Pod:"calico-apiserver-6ffcc6648d-969cl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72ff257ff2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:35.927 [INFO][5117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:35.927 [INFO][5117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" iface="eth0" netns="" Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:35.927 [INFO][5117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:35.927 [INFO][5117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:35.983 [INFO][5124] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" HandleID="k8s-pod-network.ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:35.983 [INFO][5124] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:35.983 [INFO][5124] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:36.005 [WARNING][5124] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" HandleID="k8s-pod-network.ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:36.005 [INFO][5124] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" HandleID="k8s-pod-network.ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--apiserver--6ffcc6648d--969cl-eth0" Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:36.010 [INFO][5124] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:36.017497 containerd[1466]: 2026-01-17 00:18:36.012 [INFO][5117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b" Jan 17 00:18:36.017497 containerd[1466]: time="2026-01-17T00:18:36.015834767Z" level=info msg="TearDown network for sandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\" successfully" Jan 17 00:18:36.022491 containerd[1466]: time="2026-01-17T00:18:36.022346179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:18:36.022714 containerd[1466]: time="2026-01-17T00:18:36.022563595Z" level=info msg="RemovePodSandbox \"ac3e17789e33734e6f39e2869374799c6e77c25f3ce2b710e602de5ba0ca705b\" returns successfully" Jan 17 00:18:36.023483 containerd[1466]: time="2026-01-17T00:18:36.023413992Z" level=info msg="StopPodSandbox for \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\"" Jan 17 00:18:36.093591 containerd[1466]: time="2026-01-17T00:18:36.093489684Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:36.096686 containerd[1466]: time="2026-01-17T00:18:36.096525862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:18:36.096921 containerd[1466]: time="2026-01-17T00:18:36.096567612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:18:36.097129 kubelet[2619]: E0117 00:18:36.097056 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:18:36.100063 kubelet[2619]: E0117 00:18:36.097152 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 
00:18:36.100063 kubelet[2619]: E0117 00:18:36.097328 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a4e7d527b0d249488a1c8abb4df7b11b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztbvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bd9bbf8-qw494_calico-system(6f99b557-3ce0-4f99-bb06-f6d4f3390790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:36.100363 containerd[1466]: time="2026-01-17T00:18:36.100318856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.112 [WARNING][5138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0b123c30-c4a1-486c-a4a2-f586dab5927b", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa", Pod:"coredns-668d6bf9bc-kt8n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5064202bf06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.113 [INFO][5138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.113 [INFO][5138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" iface="eth0" netns="" Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.113 [INFO][5138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.113 [INFO][5138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.168 [INFO][5146] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" HandleID="k8s-pod-network.cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.169 [INFO][5146] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.171 [INFO][5146] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.190 [WARNING][5146] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" HandleID="k8s-pod-network.cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.190 [INFO][5146] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" HandleID="k8s-pod-network.cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.196 [INFO][5146] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:36.204210 containerd[1466]: 2026-01-17 00:18:36.200 [INFO][5138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:36.204210 containerd[1466]: time="2026-01-17T00:18:36.203729749Z" level=info msg="TearDown network for sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\" successfully" Jan 17 00:18:36.204210 containerd[1466]: time="2026-01-17T00:18:36.203769632Z" level=info msg="StopPodSandbox for \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\" returns successfully" Jan 17 00:18:36.207348 containerd[1466]: time="2026-01-17T00:18:36.205235290Z" level=info msg="RemovePodSandbox for \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\"" Jan 17 00:18:36.207348 containerd[1466]: time="2026-01-17T00:18:36.205293786Z" level=info msg="Forcibly stopping sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\"" Jan 17 00:18:36.282904 containerd[1466]: time="2026-01-17T00:18:36.282805552Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:36.285340 containerd[1466]: time="2026-01-17T00:18:36.284876810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:18:36.285576 containerd[1466]: time="2026-01-17T00:18:36.284933393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:18:36.285867 kubelet[2619]: E0117 00:18:36.285794 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:18:36.286030 kubelet[2619]: E0117 00:18:36.285891 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:18:36.286852 kubelet[2619]: E0117 00:18:36.286736 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztbvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bd9bbf8-qw494_calico-system(6f99b557-3ce0-4f99-bb06-f6d4f3390790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:36.288668 kubelet[2619]: E0117 00:18:36.288594 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787bd9bbf8-qw494" podUID="6f99b557-3ce0-4f99-bb06-f6d4f3390790" Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.317 [WARNING][5161] cni-plugin/k8s.go 604: CNI_CONTAINERID 
does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0b123c30-c4a1-486c-a4a2-f586dab5927b", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"afaf7e5416becc3765b02e21d04c80f1e262ccecd8c493be17d1d5b3178349aa", Pod:"coredns-668d6bf9bc-kt8n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5064202bf06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.319 [INFO][5161] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.319 [INFO][5161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" iface="eth0" netns="" Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.319 [INFO][5161] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.319 [INFO][5161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.402 [INFO][5169] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" HandleID="k8s-pod-network.cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.402 [INFO][5169] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.403 [INFO][5169] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.414 [WARNING][5169] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" HandleID="k8s-pod-network.cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.414 [INFO][5169] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" HandleID="k8s-pod-network.cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--kt8n7-eth0" Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.418 [INFO][5169] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:36.425852 containerd[1466]: 2026-01-17 00:18:36.421 [INFO][5161] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5" Jan 17 00:18:36.426795 containerd[1466]: time="2026-01-17T00:18:36.425854464Z" level=info msg="TearDown network for sandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\" successfully" Jan 17 00:18:36.434178 containerd[1466]: time="2026-01-17T00:18:36.433828632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:18:36.434178 containerd[1466]: time="2026-01-17T00:18:36.433948824Z" level=info msg="RemovePodSandbox \"cd5ba4d2760ebd1a068face780b8ba88931fd73865464c316c35baaffe0888e5\" returns successfully" Jan 17 00:18:36.437092 containerd[1466]: time="2026-01-17T00:18:36.437026655Z" level=info msg="StopPodSandbox for \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\"" Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.556 [WARNING][5183] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7a458d0c-d067-4f59-ad18-82fe02f35050", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6", Pod:"coredns-668d6bf9bc-wmqr2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia01385edea1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.557 [INFO][5183] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.557 [INFO][5183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" iface="eth0" netns="" Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.557 [INFO][5183] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.557 [INFO][5183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.610 [INFO][5191] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" HandleID="k8s-pod-network.0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.612 [INFO][5191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.612 [INFO][5191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.630 [WARNING][5191] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" HandleID="k8s-pod-network.0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.630 [INFO][5191] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" HandleID="k8s-pod-network.0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.633 [INFO][5191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:36.643174 containerd[1466]: 2026-01-17 00:18:36.638 [INFO][5183] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:36.643174 containerd[1466]: time="2026-01-17T00:18:36.643089305Z" level=info msg="TearDown network for sandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\" successfully" Jan 17 00:18:36.643174 containerd[1466]: time="2026-01-17T00:18:36.643149344Z" level=info msg="StopPodSandbox for \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\" returns successfully" Jan 17 00:18:36.644364 containerd[1466]: time="2026-01-17T00:18:36.644296513Z" level=info msg="RemovePodSandbox for \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\"" Jan 17 00:18:36.644364 containerd[1466]: time="2026-01-17T00:18:36.644346103Z" level=info msg="Forcibly stopping sandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\"" Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.740 [WARNING][5206] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7a458d0c-d067-4f59-ad18-82fe02f35050", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"af5ff66250aa64d0ea1b2b93574ffc4bec24130c28a47ef0e889e08493d634d6", Pod:"coredns-668d6bf9bc-wmqr2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia01385edea1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.740 [INFO][5206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.740 [INFO][5206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" iface="eth0" netns="" Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.740 [INFO][5206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.741 [INFO][5206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.789 [INFO][5214] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" HandleID="k8s-pod-network.0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.790 [INFO][5214] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.790 [INFO][5214] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.801 [WARNING][5214] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" HandleID="k8s-pod-network.0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.801 [INFO][5214] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" HandleID="k8s-pod-network.0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-coredns--668d6bf9bc--wmqr2-eth0" Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.805 [INFO][5214] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:36.812359 containerd[1466]: 2026-01-17 00:18:36.808 [INFO][5206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3" Jan 17 00:18:36.813403 containerd[1466]: time="2026-01-17T00:18:36.812575593Z" level=info msg="TearDown network for sandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\" successfully" Jan 17 00:18:36.818740 containerd[1466]: time="2026-01-17T00:18:36.818660129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:18:36.818954 containerd[1466]: time="2026-01-17T00:18:36.818781607Z" level=info msg="RemovePodSandbox \"0ce8f8d53cbf40c10a6ccc3c87f42002509870e10d58ab4fb81099976968ade3\" returns successfully" Jan 17 00:18:36.819666 containerd[1466]: time="2026-01-17T00:18:36.819587787Z" level=info msg="StopPodSandbox for \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\"" Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.905 [WARNING][5228] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0", GenerateName:"calico-kube-controllers-d4d576bf5-", Namespace:"calico-system", SelfLink:"", UID:"c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d4d576bf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18", Pod:"calico-kube-controllers-d4d576bf5-8czh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2be5bae72c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.906 [INFO][5228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.906 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" iface="eth0" netns="" Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.906 [INFO][5228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.906 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.955 [INFO][5235] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" HandleID="k8s-pod-network.e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.957 [INFO][5235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.957 [INFO][5235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.969 [WARNING][5235] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" HandleID="k8s-pod-network.e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.969 [INFO][5235] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" HandleID="k8s-pod-network.e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.973 [INFO][5235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:36.979568 containerd[1466]: 2026-01-17 00:18:36.976 [INFO][5228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:36.985178 containerd[1466]: time="2026-01-17T00:18:36.979920140Z" level=info msg="TearDown network for sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\" successfully" Jan 17 00:18:36.985178 containerd[1466]: time="2026-01-17T00:18:36.982679454Z" level=info msg="StopPodSandbox for \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\" returns successfully" Jan 17 00:18:36.985178 containerd[1466]: time="2026-01-17T00:18:36.983624311Z" level=info msg="RemovePodSandbox for \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\"" Jan 17 00:18:36.985178 containerd[1466]: time="2026-01-17T00:18:36.983682380Z" level=info msg="Forcibly stopping sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\"" Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.063 [WARNING][5249] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0", GenerateName:"calico-kube-controllers-d4d576bf5-", Namespace:"calico-system", SelfLink:"", UID:"c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d4d576bf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-d68fed8960c0e46ae694", ContainerID:"c28e6ac3d20f72d554912c615cf8a84ee742d7bb055157a8ce15098a2debda18", Pod:"calico-kube-controllers-d4d576bf5-8czh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2be5bae72c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.063 [INFO][5249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.063 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" iface="eth0" netns="" Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.063 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.064 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.113 [INFO][5256] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" HandleID="k8s-pod-network.e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.114 [INFO][5256] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.114 [INFO][5256] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.125 [WARNING][5256] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" HandleID="k8s-pod-network.e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.125 [INFO][5256] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" HandleID="k8s-pod-network.e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Workload="ci--4081--3--6--nightly--20260116--2100--d68fed8960c0e46ae694-k8s-calico--kube--controllers--d4d576bf5--8czh9-eth0" Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.130 [INFO][5256] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:18:37.135969 containerd[1466]: 2026-01-17 00:18:37.132 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8" Jan 17 00:18:37.136948 containerd[1466]: time="2026-01-17T00:18:37.136068631Z" level=info msg="TearDown network for sandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\" successfully" Jan 17 00:18:37.159477 containerd[1466]: time="2026-01-17T00:18:37.159335230Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:18:37.159755 containerd[1466]: time="2026-01-17T00:18:37.159552551Z" level=info msg="RemovePodSandbox \"e206360fa1b567ce83b41a325bf7a88ba33024a348eeb6c37644a425280938e8\" returns successfully" Jan 17 00:18:40.912740 containerd[1466]: time="2026-01-17T00:18:40.912670335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:18:41.086434 containerd[1466]: time="2026-01-17T00:18:41.086340194Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:41.088785 containerd[1466]: time="2026-01-17T00:18:41.088523575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:18:41.088785 containerd[1466]: time="2026-01-17T00:18:41.088700209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:18:41.091643 kubelet[2619]: E0117 00:18:41.090679 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:18:41.091643 kubelet[2619]: E0117 00:18:41.090764 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 
17 00:18:41.091643 kubelet[2619]: E0117 00:18:41.090979 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gshjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ffcc6648d-969cl_calico-apiserver(0a073190-14cb-45b8-a9bf-4fd4665cfd04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:41.093163 kubelet[2619]: E0117 00:18:41.093015 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" podUID="0a073190-14cb-45b8-a9bf-4fd4665cfd04" Jan 17 00:18:41.918060 containerd[1466]: time="2026-01-17T00:18:41.916366997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:18:42.094658 containerd[1466]: time="2026-01-17T00:18:42.094570157Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:42.097694 containerd[1466]: time="2026-01-17T00:18:42.097415027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:18:42.097694 containerd[1466]: time="2026-01-17T00:18:42.097480010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:18:42.098098 kubelet[2619]: E0117 00:18:42.097876 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:18:42.098098 kubelet[2619]: E0117 00:18:42.097975 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:18:42.099142 kubelet[2619]: E0117 00:18:42.098201 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flqwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-d4d576bf5-8czh9_calico-system(c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:42.100037 kubelet[2619]: E0117 00:18:42.099976 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" podUID="c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e" Jan 17 00:18:42.913793 containerd[1466]: time="2026-01-17T00:18:42.913704381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:18:43.081739 containerd[1466]: time="2026-01-17T00:18:43.081661676Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:43.083666 containerd[1466]: time="2026-01-17T00:18:43.083547767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:18:43.083800 containerd[1466]: time="2026-01-17T00:18:43.083597707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:18:43.084040 kubelet[2619]: E0117 00:18:43.083958 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:18:43.084040 kubelet[2619]: E0117 00:18:43.084047 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:18:43.084942 
kubelet[2619]: E0117 00:18:43.084385 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-828hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ffcc6648d-2jknj_calico-apiserver(3475ce39-a584-4708-980c-68f68b25eff1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:43.085400 containerd[1466]: time="2026-01-17T00:18:43.085316347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:18:43.085858 kubelet[2619]: E0117 00:18:43.085750 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" podUID="3475ce39-a584-4708-980c-68f68b25eff1" Jan 17 00:18:43.252388 containerd[1466]: time="2026-01-17T00:18:43.252148530Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:43.254472 containerd[1466]: time="2026-01-17T00:18:43.254214415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:18:43.254472 containerd[1466]: time="2026-01-17T00:18:43.254368686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:18:43.256573 kubelet[2619]: E0117 00:18:43.254898 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:18:43.256573 kubelet[2619]: E0117 00:18:43.254978 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:18:43.256573 kubelet[2619]: E0117 00:18:43.255172 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-splcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxqn4_calico-system(05d5b250-556f-4421-995f-92aeade92625): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:43.260060 containerd[1466]: time="2026-01-17T00:18:43.259588077Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:18:43.425800 containerd[1466]: time="2026-01-17T00:18:43.425517259Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:43.428123 containerd[1466]: time="2026-01-17T00:18:43.427638750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:18:43.428123 containerd[1466]: time="2026-01-17T00:18:43.427857837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:18:43.430630 kubelet[2619]: E0117 00:18:43.428796 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:18:43.430630 kubelet[2619]: E0117 00:18:43.428881 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:18:43.430630 kubelet[2619]: E0117 00:18:43.429110 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-splcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxqn4_calico-system(05d5b250-556f-4421-995f-92aeade92625): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:43.431378 kubelet[2619]: E0117 00:18:43.431263 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625" Jan 17 00:18:43.940981 containerd[1466]: time="2026-01-17T00:18:43.940899832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:18:44.110893 containerd[1466]: time="2026-01-17T00:18:44.110766812Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:44.113494 containerd[1466]: time="2026-01-17T00:18:44.112708019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:18:44.113494 containerd[1466]: time="2026-01-17T00:18:44.112879358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:18:44.113747 kubelet[2619]: E0117 00:18:44.113281 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:18:44.113747 kubelet[2619]: E0117 00:18:44.113370 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:18:44.115356 kubelet[2619]: E0117 00:18:44.114590 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tm59t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-sdfcr_calico-system(fe061b2a-805b-43bd-8451-203c834c880a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:44.116353 kubelet[2619]: E0117 00:18:44.116069 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sdfcr" podUID="fe061b2a-805b-43bd-8451-203c834c880a" Jan 17 00:18:47.162979 systemd[1]: Started sshd@9-10.128.0.35:22-4.153.228.146:53960.service - OpenSSH per-connection server daemon (4.153.228.146:53960). Jan 17 00:18:47.409719 sshd[5276]: Accepted publickey for core from 4.153.228.146 port 53960 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:18:47.412261 sshd[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:18:47.424433 systemd-logind[1440]: New session 10 of user core. Jan 17 00:18:47.433880 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:18:47.837869 sshd[5276]: pam_unix(sshd:session): session closed for user core Jan 17 00:18:47.848472 systemd[1]: sshd@9-10.128.0.35:22-4.153.228.146:53960.service: Deactivated successfully. Jan 17 00:18:47.859792 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:18:47.863610 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:18:47.868386 systemd-logind[1440]: Removed session 10. 
Jan 17 00:18:49.918933 kubelet[2619]: E0117 00:18:49.918844 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787bd9bbf8-qw494" podUID="6f99b557-3ce0-4f99-bb06-f6d4f3390790"
Jan 17 00:18:52.896667 systemd[1]: Started sshd@10-10.128.0.35:22-4.153.228.146:53962.service - OpenSSH per-connection server daemon (4.153.228.146:53962).
Jan 17 00:18:53.150835 sshd[5292]: Accepted publickey for core from 4.153.228.146 port 53962 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:18:53.153587 sshd[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:18:53.164825 systemd-logind[1440]: New session 11 of user core.
Jan 17 00:18:53.170843 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 00:18:53.524294 sshd[5292]: pam_unix(sshd:session): session closed for user core
Jan 17 00:18:53.538010 systemd[1]: sshd@10-10.128.0.35:22-4.153.228.146:53962.service: Deactivated successfully.
Jan 17 00:18:53.544185 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 00:18:53.547087 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit.
Jan 17 00:18:53.551182 systemd-logind[1440]: Removed session 11.
Jan 17 00:18:56.916500 kubelet[2619]: E0117 00:18:56.914717 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sdfcr" podUID="fe061b2a-805b-43bd-8451-203c834c880a"
Jan 17 00:18:56.919845 kubelet[2619]: E0117 00:18:56.918043 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" podUID="0a073190-14cb-45b8-a9bf-4fd4665cfd04"
Jan 17 00:18:56.919845 kubelet[2619]: E0117 00:18:56.918179 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" podUID="3475ce39-a584-4708-980c-68f68b25eff1"
Jan 17 00:18:56.924209 kubelet[2619]: E0117 00:18:56.923004 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625"
Jan 17 00:18:56.924209 kubelet[2619]: E0117 00:18:56.923313 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" podUID="c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e"
Jan 17 00:18:58.583716 systemd[1]: Started sshd@11-10.128.0.35:22-4.153.228.146:53436.service - OpenSSH per-connection server daemon (4.153.228.146:53436).
Jan 17 00:18:58.830854 sshd[5331]: Accepted publickey for core from 4.153.228.146 port 53436 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:18:58.834929 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:18:58.847815 systemd-logind[1440]: New session 12 of user core.
Jan 17 00:18:58.854963 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 00:18:59.177491 sshd[5331]: pam_unix(sshd:session): session closed for user core
Jan 17 00:18:59.185016 systemd[1]: sshd@11-10.128.0.35:22-4.153.228.146:53436.service: Deactivated successfully.
Jan 17 00:18:59.191123 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 00:18:59.196602 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit.
Jan 17 00:18:59.199626 systemd-logind[1440]: Removed session 12.
Jan 17 00:18:59.228730 systemd[1]: Started sshd@12-10.128.0.35:22-4.153.228.146:53450.service - OpenSSH per-connection server daemon (4.153.228.146:53450).
Jan 17 00:18:59.482719 sshd[5346]: Accepted publickey for core from 4.153.228.146 port 53450 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:18:59.483856 sshd[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:18:59.500895 systemd-logind[1440]: New session 13 of user core.
Jan 17 00:18:59.510720 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 00:18:59.919877 sshd[5346]: pam_unix(sshd:session): session closed for user core
Jan 17 00:18:59.931714 systemd[1]: sshd@12-10.128.0.35:22-4.153.228.146:53450.service: Deactivated successfully.
Jan 17 00:18:59.932688 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit.
Jan 17 00:18:59.942244 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 00:18:59.951082 systemd-logind[1440]: Removed session 13.
Jan 17 00:18:59.982776 systemd[1]: Started sshd@13-10.128.0.35:22-4.153.228.146:53456.service - OpenSSH per-connection server daemon (4.153.228.146:53456).
Jan 17 00:19:00.238670 sshd[5358]: Accepted publickey for core from 4.153.228.146 port 53456 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:19:00.243813 sshd[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:00.260127 systemd-logind[1440]: New session 14 of user core.
Jan 17 00:19:00.268877 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 00:19:00.600890 sshd[5358]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:00.611139 systemd[1]: sshd@13-10.128.0.35:22-4.153.228.146:53456.service: Deactivated successfully.
Jan 17 00:19:00.619003 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 00:19:00.624629 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit.
Jan 17 00:19:00.630225 systemd-logind[1440]: Removed session 14.
Jan 17 00:19:00.913695 containerd[1466]: time="2026-01-17T00:19:00.913212624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:19:01.101980 containerd[1466]: time="2026-01-17T00:19:01.101282641Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:19:01.104103 containerd[1466]: time="2026-01-17T00:19:01.103850869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:19:01.104103 containerd[1466]: time="2026-01-17T00:19:01.104016557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:19:01.104765 kubelet[2619]: E0117 00:19:01.104688 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:19:01.105901 kubelet[2619]: E0117 00:19:01.104787 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:19:01.105901 kubelet[2619]: E0117 00:19:01.104976 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a4e7d527b0d249488a1c8abb4df7b11b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztbvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bd9bbf8-qw494_calico-system(6f99b557-3ce0-4f99-bb06-f6d4f3390790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:19:01.110561 containerd[1466]: time="2026-01-17T00:19:01.110408057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:19:01.278364 containerd[1466]: time="2026-01-17T00:19:01.277865535Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:19:01.280902 containerd[1466]: time="2026-01-17T00:19:01.280360247Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:19:01.280902 containerd[1466]: time="2026-01-17T00:19:01.280559330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:19:01.282201 kubelet[2619]: E0117 00:19:01.281370 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:19:01.282201 kubelet[2619]: E0117 00:19:01.281477 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:19:01.282201 kubelet[2619]: E0117 00:19:01.281654 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztbvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bd9bbf8-qw494_calico-system(6f99b557-3ce0-4f99-bb06-f6d4f3390790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:19:01.282976 kubelet[2619]: E0117 00:19:01.282880 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787bd9bbf8-qw494" podUID="6f99b557-3ce0-4f99-bb06-f6d4f3390790" Jan 17 00:19:05.656115 systemd[1]: Started sshd@14-10.128.0.35:22-4.153.228.146:38354.service - OpenSSH per-connection server daemon (4.153.228.146:38354). Jan 17 00:19:05.925298 sshd[5377]: Accepted publickey for core from 4.153.228.146 port 38354 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:19:05.927359 sshd[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:05.941804 systemd-logind[1440]: New session 15 of user core. 
Jan 17 00:19:05.947884 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:19:06.268652 sshd[5377]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:06.278209 systemd[1]: sshd@14-10.128.0.35:22-4.153.228.146:38354.service: Deactivated successfully. Jan 17 00:19:06.287311 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:19:06.293974 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:19:06.296862 systemd-logind[1440]: Removed session 15. Jan 17 00:19:08.914266 containerd[1466]: time="2026-01-17T00:19:08.913812104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:19:09.092277 containerd[1466]: time="2026-01-17T00:19:09.091905340Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:19:09.094683 containerd[1466]: time="2026-01-17T00:19:09.094310145Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:19:09.094683 containerd[1466]: time="2026-01-17T00:19:09.094500311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:19:09.095772 kubelet[2619]: E0117 00:19:09.095691 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:19:09.095772 kubelet[2619]: E0117 00:19:09.095789 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:19:09.097013 kubelet[2619]: E0117 00:19:09.096005 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-splcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxqn4_calico-system(05d5b250-556f-4421-995f-92aeade92625): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:19:09.099972 containerd[1466]: time="2026-01-17T00:19:09.099906707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:19:09.269617 containerd[1466]: time="2026-01-17T00:19:09.269334329Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:19:09.271634 containerd[1466]: time="2026-01-17T00:19:09.271477264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:19:09.271634 containerd[1466]: time="2026-01-17T00:19:09.271608902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:19:09.272294 kubelet[2619]: E0117 00:19:09.272213 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:19:09.272437 kubelet[2619]: E0117 00:19:09.272312 2619 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:19:09.273722 kubelet[2619]: E0117 00:19:09.272548 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-splcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxqn4_calico-system(05d5b250-556f-4421-995f-92aeade92625): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:19:09.274475 kubelet[2619]: E0117 00:19:09.274381 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625" Jan 17 00:19:09.923663 containerd[1466]: time="2026-01-17T00:19:09.923229037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:19:10.112719 containerd[1466]: time="2026-01-17T00:19:10.112376974Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:19:10.114577 containerd[1466]: time="2026-01-17T00:19:10.114279568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:19:10.114577 containerd[1466]: time="2026-01-17T00:19:10.114487210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:19:10.114868 kubelet[2619]: E0117 00:19:10.114742 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:19:10.114868 kubelet[2619]: E0117 00:19:10.114828 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:19:10.115599 kubelet[2619]: E0117 00:19:10.115184 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tm59t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-sdfcr_calico-system(fe061b2a-805b-43bd-8451-203c834c880a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:19:10.118075 kubelet[2619]: E0117 00:19:10.117086 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sdfcr" podUID="fe061b2a-805b-43bd-8451-203c834c880a" Jan 17 00:19:10.118211 containerd[1466]: time="2026-01-17T00:19:10.117493531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:19:10.281029 containerd[1466]: time="2026-01-17T00:19:10.280792371Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:19:10.288764 containerd[1466]: time="2026-01-17T00:19:10.288560348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:19:10.289027 containerd[1466]: time="2026-01-17T00:19:10.288845690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:19:10.289259 kubelet[2619]: E0117 00:19:10.289157 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:19:10.289638 kubelet[2619]: E0117 00:19:10.289255 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:19:10.290291 kubelet[2619]: E0117 00:19:10.289655 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gshjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ffcc6648d-969cl_calico-apiserver(0a073190-14cb-45b8-a9bf-4fd4665cfd04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:19:10.292164 kubelet[2619]: E0117 00:19:10.290862 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" podUID="0a073190-14cb-45b8-a9bf-4fd4665cfd04" Jan 17 00:19:10.292420 containerd[1466]: time="2026-01-17T00:19:10.291244010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:19:10.465548 containerd[1466]: time="2026-01-17T00:19:10.465393053Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:19:10.468207 
containerd[1466]: time="2026-01-17T00:19:10.467854593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:19:10.468207 containerd[1466]: time="2026-01-17T00:19:10.468014500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:19:10.469597 kubelet[2619]: E0117 00:19:10.468718 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:19:10.469597 kubelet[2619]: E0117 00:19:10.468815 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:19:10.469597 kubelet[2619]: E0117 00:19:10.469010 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-828hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6ffcc6648d-2jknj_calico-apiserver(3475ce39-a584-4708-980c-68f68b25eff1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:19:10.470530 kubelet[2619]: E0117 00:19:10.470259 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" podUID="3475ce39-a584-4708-980c-68f68b25eff1" Jan 17 00:19:11.325023 systemd[1]: Started sshd@15-10.128.0.35:22-4.153.228.146:38360.service - OpenSSH per-connection server daemon (4.153.228.146:38360). Jan 17 00:19:11.591705 sshd[5394]: Accepted publickey for core from 4.153.228.146 port 38360 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:19:11.596402 sshd[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:11.610148 systemd-logind[1440]: New session 16 of user core. Jan 17 00:19:11.621236 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:19:11.917618 containerd[1466]: time="2026-01-17T00:19:11.917550236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:19:11.948937 sshd[5394]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:11.961006 systemd[1]: sshd@15-10.128.0.35:22-4.153.228.146:38360.service: Deactivated successfully. Jan 17 00:19:11.967139 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:19:11.973627 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:19:11.976899 systemd-logind[1440]: Removed session 16. 
Jan 17 00:19:12.104754 containerd[1466]: time="2026-01-17T00:19:12.104684579Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:19:12.107322 containerd[1466]: time="2026-01-17T00:19:12.107156160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:19:12.107607 containerd[1466]: time="2026-01-17T00:19:12.107388883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:19:12.108722 kubelet[2619]: E0117 00:19:12.107868 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:19:12.108722 kubelet[2619]: E0117 00:19:12.107964 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:19:12.108722 kubelet[2619]: E0117 00:19:12.108176 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flqwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-d4d576bf5-8czh9_calico-system(c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:19:12.111246 kubelet[2619]: E0117 00:19:12.109658 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" podUID="c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e" Jan 17 00:19:15.917747 kubelet[2619]: E0117 00:19:15.917659 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787bd9bbf8-qw494" podUID="6f99b557-3ce0-4f99-bb06-f6d4f3390790" Jan 17 00:19:17.007670 systemd[1]: Started sshd@16-10.128.0.35:22-4.153.228.146:50612.service - OpenSSH per-connection server daemon (4.153.228.146:50612). Jan 17 00:19:17.281764 sshd[5411]: Accepted publickey for core from 4.153.228.146 port 50612 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:19:17.283654 sshd[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:17.293032 systemd-logind[1440]: New session 17 of user core. Jan 17 00:19:17.300892 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 17 00:19:17.619770 sshd[5411]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:17.630251 systemd[1]: sshd@16-10.128.0.35:22-4.153.228.146:50612.service: Deactivated successfully.
Jan 17 00:19:17.631397 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit.
Jan 17 00:19:17.637305 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 00:19:17.639934 systemd-logind[1440]: Removed session 17.
Jan 17 00:19:21.917772 kubelet[2619]: E0117 00:19:21.917682 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sdfcr" podUID="fe061b2a-805b-43bd-8451-203c834c880a"
Jan 17 00:19:21.922625 kubelet[2619]: E0117 00:19:21.922544 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" podUID="0a073190-14cb-45b8-a9bf-4fd4665cfd04"
Jan 17 00:19:22.677733 systemd[1]: Started sshd@17-10.128.0.35:22-4.153.228.146:50616.service - OpenSSH per-connection server daemon (4.153.228.146:50616).
Jan 17 00:19:22.914673 kubelet[2619]: E0117 00:19:22.914587 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625"
Jan 17 00:19:22.929430 sshd[5424]: Accepted publickey for core from 4.153.228.146 port 50616 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:19:22.931479 sshd[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:22.941855 systemd-logind[1440]: New session 18 of user core.
Jan 17 00:19:22.952168 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:19:23.268902 sshd[5424]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:23.282321 systemd[1]: sshd@17-10.128.0.35:22-4.153.228.146:50616.service: Deactivated successfully.
Jan 17 00:19:23.292274 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:19:23.294100 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:19:23.330050 systemd[1]: Started sshd@18-10.128.0.35:22-4.153.228.146:50628.service - OpenSSH per-connection server daemon (4.153.228.146:50628).
Jan 17 00:19:23.333654 systemd-logind[1440]: Removed session 18.
Jan 17 00:19:23.599339 sshd[5437]: Accepted publickey for core from 4.153.228.146 port 50628 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:19:23.604911 sshd[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:23.620611 systemd-logind[1440]: New session 19 of user core.
Jan 17 00:19:23.626011 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:19:24.059949 sshd[5437]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:24.076136 systemd[1]: sshd@18-10.128.0.35:22-4.153.228.146:50628.service: Deactivated successfully.
Jan 17 00:19:24.087752 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:19:24.094163 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:19:24.121677 systemd[1]: Started sshd@19-10.128.0.35:22-4.153.228.146:50644.service - OpenSSH per-connection server daemon (4.153.228.146:50644).
Jan 17 00:19:24.123599 systemd-logind[1440]: Removed session 19.
Jan 17 00:19:24.371979 sshd[5448]: Accepted publickey for core from 4.153.228.146 port 50644 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:19:24.376064 sshd[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:24.386903 systemd-logind[1440]: New session 20 of user core.
Jan 17 00:19:24.396819 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:19:24.916500 kubelet[2619]: E0117 00:19:24.914546 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" podUID="3475ce39-a584-4708-980c-68f68b25eff1"
Jan 17 00:19:25.677645 sshd[5448]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:25.692666 systemd[1]: sshd@19-10.128.0.35:22-4.153.228.146:50644.service: Deactivated successfully.
Jan 17 00:19:25.693674 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:19:25.707072 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:19:25.735111 systemd-logind[1440]: Removed session 20.
Jan 17 00:19:25.736438 systemd[1]: Started sshd@20-10.128.0.35:22-4.153.228.146:60814.service - OpenSSH per-connection server daemon (4.153.228.146:60814).
Jan 17 00:19:25.921507 kubelet[2619]: E0117 00:19:25.921406 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" podUID="c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e"
Jan 17 00:19:26.018427 sshd[5464]: Accepted publickey for core from 4.153.228.146 port 60814 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:19:26.021716 sshd[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:26.040929 systemd-logind[1440]: New session 21 of user core.
Jan 17 00:19:26.051376 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:19:26.787894 sshd[5464]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:26.802560 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:19:26.803894 systemd[1]: sshd@20-10.128.0.35:22-4.153.228.146:60814.service: Deactivated successfully.
Jan 17 00:19:26.812670 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:19:26.839627 systemd-logind[1440]: Removed session 21.
Jan 17 00:19:26.847700 systemd[1]: Started sshd@21-10.128.0.35:22-4.153.228.146:60816.service - OpenSSH per-connection server daemon (4.153.228.146:60816).
Jan 17 00:19:27.107982 sshd[5477]: Accepted publickey for core from 4.153.228.146 port 60816 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:19:27.108994 sshd[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:27.121395 systemd-logind[1440]: New session 22 of user core.
Jan 17 00:19:27.127393 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:19:27.452648 sshd[5477]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:27.463778 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit.
Jan 17 00:19:27.465130 systemd[1]: sshd@21-10.128.0.35:22-4.153.228.146:60816.service: Deactivated successfully.
Jan 17 00:19:27.473739 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 00:19:27.482234 systemd-logind[1440]: Removed session 22.
Jan 17 00:19:27.581298 systemd[1]: run-containerd-runc-k8s.io-dba7f3859a25c596918269d4509723d11371e753f35eb66ef9603448e2f97d6f-runc.oSsiMm.mount: Deactivated successfully.
Jan 17 00:19:29.914745 kubelet[2619]: E0117 00:19:29.914669 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787bd9bbf8-qw494" podUID="6f99b557-3ce0-4f99-bb06-f6d4f3390790"
Jan 17 00:19:32.507564 systemd[1]: Started sshd@22-10.128.0.35:22-4.153.228.146:60820.service - OpenSSH per-connection server daemon (4.153.228.146:60820).
Jan 17 00:19:32.762722 sshd[5513]: Accepted publickey for core from 4.153.228.146 port 60820 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:19:32.769667 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:32.786142 systemd-logind[1440]: New session 23 of user core.
Jan 17 00:19:32.797411 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 00:19:33.107124 sshd[5513]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:33.120940 systemd[1]: sshd@22-10.128.0.35:22-4.153.228.146:60820.service: Deactivated successfully.
Jan 17 00:19:33.128241 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 00:19:33.135568 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit.
Jan 17 00:19:33.139674 systemd-logind[1440]: Removed session 23.
Jan 17 00:19:34.913746 kubelet[2619]: E0117 00:19:34.913492 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" podUID="0a073190-14cb-45b8-a9bf-4fd4665cfd04"
Jan 17 00:19:35.914064 kubelet[2619]: E0117 00:19:35.913475 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sdfcr" podUID="fe061b2a-805b-43bd-8451-203c834c880a"
Jan 17 00:19:36.917492 kubelet[2619]: E0117 00:19:36.916761 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625"
Jan 17 00:19:36.917492 kubelet[2619]: E0117 00:19:36.916972 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" podUID="3475ce39-a584-4708-980c-68f68b25eff1"
Jan 17 00:19:38.163110 systemd[1]: Started sshd@23-10.128.0.35:22-4.153.228.146:33982.service - OpenSSH per-connection server daemon (4.153.228.146:33982).
Jan 17 00:19:38.431678 sshd[5530]: Accepted publickey for core from 4.153.228.146 port 33982 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:19:38.435992 sshd[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:38.447152 systemd-logind[1440]: New session 24 of user core.
Jan 17 00:19:38.459429 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 00:19:38.797235 sshd[5530]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:38.808668 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit.
Jan 17 00:19:38.810344 systemd[1]: sshd@23-10.128.0.35:22-4.153.228.146:33982.service: Deactivated successfully.
Jan 17 00:19:38.817291 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 00:19:38.824997 systemd-logind[1440]: Removed session 24.
Jan 17 00:19:39.917132 kubelet[2619]: E0117 00:19:39.916696 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d4d576bf5-8czh9" podUID="c361c6e8-c3e0-4b7a-8e22-dd558d5fdb2e"
Jan 17 00:19:42.913183 containerd[1466]: time="2026-01-17T00:19:42.913120212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 17 00:19:43.082736 containerd[1466]: time="2026-01-17T00:19:43.082633839Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:19:43.084555 containerd[1466]: time="2026-01-17T00:19:43.084398985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 17 00:19:43.084555 containerd[1466]: time="2026-01-17T00:19:43.084406999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 17 00:19:43.085376 kubelet[2619]: E0117 00:19:43.085299 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:19:43.087090 kubelet[2619]: E0117 00:19:43.085403 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:19:43.087090 kubelet[2619]: E0117 00:19:43.085645 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a4e7d527b0d249488a1c8abb4df7b11b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztbvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bd9bbf8-qw494_calico-system(6f99b557-3ce0-4f99-bb06-f6d4f3390790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:19:43.089003 containerd[1466]: time="2026-01-17T00:19:43.088947554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 17 00:19:43.257013 containerd[1466]: time="2026-01-17T00:19:43.256760842Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:19:43.258871 containerd[1466]: time="2026-01-17T00:19:43.258753093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 17 00:19:43.259117 containerd[1466]: time="2026-01-17T00:19:43.258871506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 17 00:19:43.259370 kubelet[2619]: E0117 00:19:43.259290 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:19:43.259503 kubelet[2619]: E0117 00:19:43.259394 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:19:43.261491 kubelet[2619]: E0117 00:19:43.259687 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztbvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787bd9bbf8-qw494_calico-system(6f99b557-3ce0-4f99-bb06-f6d4f3390790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:19:43.261868 kubelet[2619]: E0117 00:19:43.261680 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787bd9bbf8-qw494" podUID="6f99b557-3ce0-4f99-bb06-f6d4f3390790"
Jan 17 00:19:43.850118 systemd[1]: Started sshd@24-10.128.0.35:22-4.153.228.146:33990.service - OpenSSH per-connection server daemon (4.153.228.146:33990).
Jan 17 00:19:44.116785 sshd[5549]: Accepted publickey for core from 4.153.228.146 port 33990 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:19:44.120776 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:44.138730 systemd-logind[1440]: New session 25 of user core.
Jan 17 00:19:44.144904 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 00:19:44.457659 sshd[5549]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:44.468159 systemd[1]: sshd@24-10.128.0.35:22-4.153.228.146:33990.service: Deactivated successfully.
Jan 17 00:19:44.469036 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit.
Jan 17 00:19:44.477028 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 00:19:44.483131 systemd-logind[1440]: Removed session 25.
Jan 17 00:19:46.912761 kubelet[2619]: E0117 00:19:46.912669 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sdfcr" podUID="fe061b2a-805b-43bd-8451-203c834c880a"
Jan 17 00:19:47.912976 kubelet[2619]: E0117 00:19:47.912565 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-2jknj" podUID="3475ce39-a584-4708-980c-68f68b25eff1"
Jan 17 00:19:48.911894 kubelet[2619]: E0117 00:19:48.911813 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ffcc6648d-969cl" podUID="0a073190-14cb-45b8-a9bf-4fd4665cfd04"
Jan 17 00:19:49.511753 systemd[1]: Started sshd@25-10.128.0.35:22-4.153.228.146:59656.service - OpenSSH per-connection server daemon (4.153.228.146:59656).
Jan 17 00:19:49.765923 sshd[5569]: Accepted publickey for core from 4.153.228.146 port 59656 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:19:49.772784 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:49.787793 systemd-logind[1440]: New session 26 of user core.
Jan 17 00:19:49.795394 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 00:19:49.920404 containerd[1466]: time="2026-01-17T00:19:49.918762987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 17 00:19:50.100255 containerd[1466]: time="2026-01-17T00:19:50.099864303Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:19:50.104393 containerd[1466]: time="2026-01-17T00:19:50.103869306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 17 00:19:50.104863 containerd[1466]: time="2026-01-17T00:19:50.104428970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 17 00:19:50.105276 kubelet[2619]: E0117 00:19:50.104664 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:19:50.105276 kubelet[2619]: E0117 00:19:50.104744 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:19:50.105276 kubelet[2619]: E0117 00:19:50.104940 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-splcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxqn4_calico-system(05d5b250-556f-4421-995f-92aeade92625): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:19:50.111236 containerd[1466]: time="2026-01-17T00:19:50.111159062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 17 00:19:50.162850 sshd[5569]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:50.173963 systemd[1]: sshd@25-10.128.0.35:22-4.153.228.146:59656.service: Deactivated successfully.
Jan 17 00:19:50.174573 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit.
Jan 17 00:19:50.184158 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 00:19:50.191353 systemd-logind[1440]: Removed session 26.
Jan 17 00:19:50.288758 containerd[1466]: time="2026-01-17T00:19:50.288390190Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:19:50.290826 containerd[1466]: time="2026-01-17T00:19:50.290268145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 17 00:19:50.290826 containerd[1466]: time="2026-01-17T00:19:50.290409156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 17 00:19:50.291873 kubelet[2619]: E0117 00:19:50.291465 2619 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:19:50.291873 kubelet[2619]: E0117 00:19:50.291554 2619 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:19:50.291873 kubelet[2619]: E0117 00:19:50.291778 2619 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-splcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxqn4_calico-system(05d5b250-556f-4421-995f-92aeade92625): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:19:50.293103 kubelet[2619]: E0117 00:19:50.293004 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxqn4" podUID="05d5b250-556f-4421-995f-92aeade92625"