Jan 17 00:14:50.076232 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:14:50.076276 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:14:50.076293 kernel: BIOS-provided physical RAM map:
Jan 17 00:14:50.076307 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 17 00:14:50.076320 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 17 00:14:50.076334 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 17 00:14:50.076351 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 17 00:14:50.076370 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 17 00:14:50.076384 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 17 00:14:50.076399 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 17 00:14:50.076414 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 17 00:14:50.076428 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 17 00:14:50.076443 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 17 00:14:50.076457 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 17 00:14:50.076479 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 17 00:14:50.076495 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 17 00:14:50.076512 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 17 00:14:50.076536 kernel: NX (Execute Disable) protection: active
Jan 17 00:14:50.076552 kernel: APIC: Static calls initialized
Jan 17 00:14:50.076568 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:14:50.076585 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018
Jan 17 00:14:50.076602 kernel: SMBIOS 2.4 present.
Jan 17 00:14:50.076618 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Jan 17 00:14:50.076633 kernel: Hypervisor detected: KVM
Jan 17 00:14:50.076652 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:14:50.076667 kernel: kvm-clock: using sched offset of 12503798322 cycles
Jan 17 00:14:50.076685 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:14:50.076702 kernel: tsc: Detected 2299.998 MHz processor
Jan 17 00:14:50.076718 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:14:50.076735 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:14:50.076752 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 17 00:14:50.076768 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 17 00:14:50.076785 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:14:50.076805 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 17 00:14:50.076822 kernel: Using GB pages for direct mapping
Jan 17 00:14:50.076839 kernel: Secure boot disabled
Jan 17 00:14:50.076856 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:14:50.076873 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 17 00:14:50.076913 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 17 00:14:50.076931 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 17 00:14:50.076955 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 17 00:14:50.076976 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 17 00:14:50.076994 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Jan 17 00:14:50.077012 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 17 00:14:50.077030 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 17 00:14:50.077047 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 17 00:14:50.077063 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 17 00:14:50.077085 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 17 00:14:50.077103 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 17 00:14:50.077120 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 17 00:14:50.077137 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 17 00:14:50.077155 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 17 00:14:50.077172 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 17 00:14:50.077189 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 17 00:14:50.077207 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 17 00:14:50.077224 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 17 00:14:50.077246 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 17 00:14:50.077264 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:14:50.077282 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:14:50.077300 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 00:14:50.077318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 17 00:14:50.077336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 17 00:14:50.077354 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 17 00:14:50.077372 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 17 00:14:50.077391 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Jan 17 00:14:50.077413 kernel: Zone ranges:
Jan 17 00:14:50.077431 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:14:50.077450 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 00:14:50.077468 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 17 00:14:50.077487 kernel: Movable zone start for each node
Jan 17 00:14:50.077505 kernel: Early memory node ranges
Jan 17 00:14:50.077523 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 17 00:14:50.077547 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 17 00:14:50.077565 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 17 00:14:50.077587 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 17 00:14:50.077605 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 17 00:14:50.077624 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 17 00:14:50.077642 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:14:50.077660 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 17 00:14:50.077678 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 17 00:14:50.077697 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 17 00:14:50.077715 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 17 00:14:50.077733 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 00:14:50.077751 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:14:50.077774 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:14:50.077793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:14:50.077811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:14:50.077830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:14:50.077848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:14:50.077867 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:14:50.077896 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:14:50.077927 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 17 00:14:50.077950 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:14:50.077968 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:14:50.077987 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:14:50.078005 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:14:50.078024 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:14:50.078040 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:14:50.078058 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:14:50.078077 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:14:50.078097 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:14:50.078120 kernel: random: crng init done
Jan 17 00:14:50.078138 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 17 00:14:50.078157 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:14:50.078175 kernel: Fallback order for Node 0: 0
Jan 17 00:14:50.078193 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 17 00:14:50.078212 kernel: Policy zone: Normal
Jan 17 00:14:50.078231 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:14:50.078249 kernel: software IO TLB: area num 2.
Jan 17 00:14:50.078268 kernel: Memory: 7513184K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 347140K reserved, 0K cma-reserved)
Jan 17 00:14:50.078289 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:14:50.078308 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:14:50.078326 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:14:50.078344 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:14:50.078362 kernel: Dynamic Preempt: voluntary
Jan 17 00:14:50.078380 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:14:50.078399 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:14:50.078419 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:14:50.078454 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:14:50.078472 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:14:50.078491 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:14:50.078510 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:14:50.078540 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:14:50.078559 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:14:50.078577 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:14:50.078596 kernel: Console: colour dummy device 80x25
Jan 17 00:14:50.078619 kernel: printk: console [ttyS0] enabled
Jan 17 00:14:50.078638 kernel: ACPI: Core revision 20230628
Jan 17 00:14:50.078656 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:14:50.078673 kernel: x2apic enabled
Jan 17 00:14:50.078692 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:14:50.078712 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 17 00:14:50.078732 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 17 00:14:50.078752 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 17 00:14:50.078771 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 17 00:14:50.078795 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 17 00:14:50.078814 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:14:50.078832 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 17 00:14:50.078851 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 17 00:14:50.078871 kernel: Spectre V2 : Mitigation: IBRS
Jan 17 00:14:50.078905 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:14:50.078922 kernel: RETBleed: Mitigation: IBRS
Jan 17 00:14:50.078937 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 00:14:50.078954 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 17 00:14:50.078975 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 00:14:50.078991 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 00:14:50.079007 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:14:50.079023 kernel: active return thunk: its_return_thunk
Jan 17 00:14:50.079040 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:14:50.079057 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:14:50.079075 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:14:50.079093 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:14:50.079112 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:14:50.079135 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 00:14:50.079153 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:14:50.079171 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:14:50.079190 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:14:50.079207 kernel: landlock: Up and running.
Jan 17 00:14:50.079245 kernel: SELinux: Initializing.
Jan 17 00:14:50.079265 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:14:50.079283 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:14:50.079303 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 17 00:14:50.079327 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:14:50.079345 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:14:50.079364 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:14:50.079383 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 17 00:14:50.079401 kernel: signal: max sigframe size: 1776
Jan 17 00:14:50.079419 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:14:50.079438 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:14:50.079457 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:14:50.079475 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:14:50.079497 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:14:50.079516 kernel: .... node #0, CPUs: #1
Jan 17 00:14:50.079543 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 00:14:50.079563 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 00:14:50.079582 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:14:50.079600 kernel: smpboot: Max logical packages: 1
Jan 17 00:14:50.079618 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 17 00:14:50.079636 kernel: devtmpfs: initialized
Jan 17 00:14:50.079659 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:14:50.079678 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 17 00:14:50.079697 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:14:50.079715 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:14:50.079733 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:14:50.079752 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:14:50.079770 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:14:50.079788 kernel: audit: type=2000 audit(1768608888.893:1): state=initialized audit_enabled=0 res=1
Jan 17 00:14:50.079806 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:14:50.079828 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:14:50.079846 kernel: cpuidle: using governor menu
Jan 17 00:14:50.079865 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:14:50.079897 kernel: dca service started, version 1.12.1
Jan 17 00:14:50.079916 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:14:50.079935 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:14:50.079953 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:14:50.079972 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:14:50.079990 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:14:50.080013 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:14:50.080031 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:14:50.080050 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:14:50.080069 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:14:50.080087 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 00:14:50.080106 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:14:50.080124 kernel: ACPI: Interpreter enabled
Jan 17 00:14:50.080142 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 00:14:50.080160 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:14:50.080182 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:14:50.080201 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 17 00:14:50.080220 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 17 00:14:50.080238 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:14:50.080511 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:14:50.080725 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:14:50.080959 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:14:50.080993 kernel: PCI host bridge to bus 0000:00
Jan 17 00:14:50.081185 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:14:50.081361 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:14:50.081540 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:14:50.081709 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 17 00:14:50.081878 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:14:50.082116 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:14:50.082330 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 17 00:14:50.082535 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 00:14:50.082729 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 00:14:50.082950 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 17 00:14:50.083158 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 17 00:14:50.083358 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 17 00:14:50.083579 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:14:50.083774 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 17 00:14:50.084034 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 17 00:14:50.084238 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 00:14:50.084432 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 17 00:14:50.084639 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 17 00:14:50.084666 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:14:50.084693 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:14:50.084711 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:14:50.084731 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:14:50.084751 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:14:50.084771 kernel: iommu: Default domain type: Translated
Jan 17 00:14:50.084791 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:14:50.084810 kernel: efivars: Registered efivars operations
Jan 17 00:14:50.084829 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:14:50.084850 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:14:50.084869 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 17 00:14:50.084908 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 17 00:14:50.084927 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 17 00:14:50.084946 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 17 00:14:50.084965 kernel: vgaarb: loaded
Jan 17 00:14:50.084986 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:14:50.085006 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:14:50.085025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:14:50.085044 kernel: pnp: PnP ACPI init
Jan 17 00:14:50.085064 kernel: pnp: PnP ACPI: found 7 devices
Jan 17 00:14:50.085089 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:14:50.085109 kernel: NET: Registered PF_INET protocol family
Jan 17 00:14:50.085129 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:14:50.085148 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 17 00:14:50.085167 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:14:50.085187 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:14:50.085207 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 17 00:14:50.085226 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 17 00:14:50.085250 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:14:50.085269 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:14:50.085288 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:14:50.085308 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:14:50.085490 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:14:50.085671 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:14:50.085841 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:14:50.086036 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 17 00:14:50.086237 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:14:50.086264 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:14:50.086283 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 00:14:50.086301 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 17 00:14:50.086319 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:14:50.086340 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 17 00:14:50.086360 kernel: clocksource: Switched to clocksource tsc
Jan 17 00:14:50.086380 kernel: Initialise system trusted keyrings
Jan 17 00:14:50.086404 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 17 00:14:50.086424 kernel: Key type asymmetric registered
Jan 17 00:14:50.086442 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:14:50.086461 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:14:50.086480 kernel: io scheduler mq-deadline registered
Jan 17 00:14:50.086500 kernel: io scheduler kyber registered
Jan 17 00:14:50.086520 kernel: io scheduler bfq registered
Jan 17 00:14:50.086545 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:14:50.086565 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 00:14:50.086800 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 17 00:14:50.086825 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 17 00:14:50.087039 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 17 00:14:50.087064 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 00:14:50.087243 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 17 00:14:50.087267 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:14:50.087285 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:14:50.087304 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 17 00:14:50.087322 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 17 00:14:50.087348 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 17 00:14:50.087541 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 17 00:14:50.087567 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:14:50.087586 kernel: i8042: Warning: Keylock active
Jan 17 00:14:50.087604 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:14:50.087622 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:14:50.087809 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 00:14:50.088016 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 00:14:50.088188 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:14:49 UTC (1768608889)
Jan 17 00:14:50.088355 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 00:14:50.088378 kernel: intel_pstate: CPU model not supported
Jan 17 00:14:50.088397 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:14:50.088415 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:14:50.088433 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:14:50.088452 kernel: Segment Routing with IPv6
Jan 17 00:14:50.088471 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:14:50.088495 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:14:50.088514 kernel: Key type dns_resolver registered
Jan 17 00:14:50.088540 kernel: IPI shorthand broadcast: enabled
Jan 17 00:14:50.088559 kernel: sched_clock: Marking stable (816003794, 124513037)->(952176644, -11659813)
Jan 17 00:14:50.088577 kernel: registered taskstats version 1
Jan 17 00:14:50.088596 kernel: Loading compiled-in X.509 certificates
Jan 17 00:14:50.088614 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:14:50.088632 kernel: Key type .fscrypt registered
Jan 17 00:14:50.088650 kernel: Key type fscrypt-provisioning registered
Jan 17 00:14:50.088672 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:14:50.088691 kernel: ima: No architecture policies found
Jan 17 00:14:50.088709 kernel: clk: Disabling unused clocks
Jan 17 00:14:50.088728 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:14:50.088746 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:14:50.088765 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:14:50.088784 kernel: Run /init as init process
Jan 17 00:14:50.088802 kernel: with arguments:
Jan 17 00:14:50.088820 kernel: /init
Jan 17 00:14:50.088842 kernel: with environment:
Jan 17 00:14:50.088860 kernel: HOME=/
Jan 17 00:14:50.088878 kernel: TERM=linux
Jan 17 00:14:50.088933 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:14:50.088956 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:14:50.088978 systemd[1]: Detected virtualization google.
Jan 17 00:14:50.088998 systemd[1]: Detected architecture x86-64.
Jan 17 00:14:50.089021 systemd[1]: Running in initrd.
Jan 17 00:14:50.089040 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:14:50.089059 systemd[1]: Hostname set to .
Jan 17 00:14:50.089079 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:14:50.089098 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:14:50.089117 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:14:50.089137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:14:50.089157 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:14:50.089181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:14:50.089200 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:14:50.089220 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:14:50.089242 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:14:50.089261 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:14:50.089281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:14:50.089301 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:14:50.089324 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:14:50.089344 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:14:50.089385 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:14:50.089409 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:14:50.089428 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:14:50.089448 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:14:50.089472 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:14:50.089493 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:14:50.089513 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:14:50.089539 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:14:50.089560 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:14:50.089580 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:14:50.089600 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:14:50.089620 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:14:50.089640 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:14:50.089664 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:14:50.089684 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:14:50.089704 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:14:50.089724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:14:50.089744 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:14:50.089795 systemd-journald[184]: Collecting audit messages is disabled.
Jan 17 00:14:50.089841 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:14:50.089861 systemd-journald[184]: Journal started
Jan 17 00:14:50.089913 systemd-journald[184]: Runtime Journal (/run/log/journal/16d69938261d4452a3b9cd7656c7b697) is 8.0M, max 148.7M, 140.7M free.
Jan 17 00:14:50.095445 systemd-modules-load[185]: Inserted module 'overlay'
Jan 17 00:14:50.108046 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:14:50.099482 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:14:50.123442 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:14:50.125582 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:14:50.133862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:14:50.142162 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:14:50.147998 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:14:50.145021 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:14:50.152151 kernel: Bridge firewalling registered
Jan 17 00:14:50.150976 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 17 00:14:50.156484 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:14:50.171093 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:14:50.182078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:14:50.183954 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:14:50.206094 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:14:50.214283 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:14:50.218236 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:14:50.228105 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:14:50.231320 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:14:50.264193 dracut-cmdline[216]: dracut-dracut-053
Jan 17 00:14:50.269026 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:14:50.292789 systemd-resolved[217]: Positive Trust Anchors:
Jan 17 00:14:50.293221 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:14:50.293303 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:14:50.298491 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jan 17 00:14:50.300290 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:14:50.323108 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:14:50.375926 kernel: SCSI subsystem initialized
Jan 17 00:14:50.387935 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:14:50.399929 kernel: iscsi: registered transport (tcp)
Jan 17 00:14:50.423964 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:14:50.424028 kernel: QLogic iSCSI HBA Driver
Jan 17 00:14:50.475914 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:14:50.483114 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:14:50.522130 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:14:50.522205 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:14:50.522242 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:14:50.567934 kernel: raid6: avx2x4 gen() 18130 MB/s
Jan 17 00:14:50.584914 kernel: raid6: avx2x2 gen() 18208 MB/s
Jan 17 00:14:50.602279 kernel: raid6: avx2x1 gen() 13957 MB/s
Jan 17 00:14:50.602315 kernel: raid6: using algorithm avx2x2 gen() 18208 MB/s
Jan 17 00:14:50.620297 kernel: raid6: .... xor() 17931 MB/s, rmw enabled
Jan 17 00:14:50.620339 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:14:50.642920 kernel: xor: automatically using best checksumming function avx
Jan 17 00:14:50.815933 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:14:50.829146 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:14:50.836140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:14:50.867200 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Jan 17 00:14:50.874088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:14:50.885820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:14:50.916174 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jan 17 00:14:50.952462 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:14:50.966070 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:14:51.046526 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:14:51.059132 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:14:51.105079 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:14:51.111744 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:14:51.115993 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:14:51.119988 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:14:51.128094 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:14:51.157143 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:14:51.157237 kernel: blk-mq: reduced tag depth to 10240
Jan 17 00:14:51.191023 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 17 00:14:51.210052 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:14:51.219058 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:14:51.262219 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:14:51.262288 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:14:51.261771 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:14:51.261999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:14:51.266291 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:14:51.280965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:14:51.293000 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Jan 17 00:14:51.293320 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 17 00:14:51.281188 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:14:51.285025 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:14:51.298705 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 17 00:14:51.299926 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 17 00:14:51.300678 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 17 00:14:51.298593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:14:51.309911 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:14:51.309959 kernel: GPT:17805311 != 33554431
Jan 17 00:14:51.309992 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:14:51.310016 kernel: GPT:17805311 != 33554431
Jan 17 00:14:51.311202 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:14:51.311251 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:14:51.313003 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 17 00:14:51.322937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:14:51.336192 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:14:51.383956 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (451)
Jan 17 00:14:51.383663 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:14:51.396035 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (461)
Jan 17 00:14:51.405914 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 17 00:14:51.420770 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 17 00:14:51.436735 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 17 00:14:51.440990 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 17 00:14:51.452666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 17 00:14:51.463144 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:14:51.474729 disk-uuid[551]: Primary Header is updated.
Jan 17 00:14:51.474729 disk-uuid[551]: Secondary Entries is updated.
Jan 17 00:14:51.474729 disk-uuid[551]: Secondary Header is updated.
Jan 17 00:14:51.485929 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:14:51.502917 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:14:51.509991 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:14:52.510679 disk-uuid[552]: The operation has completed successfully.
Jan 17 00:14:52.516062 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:14:52.588217 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:14:52.588375 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:14:52.605074 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:14:52.636205 sh[569]: Success
Jan 17 00:14:52.657096 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:14:52.730337 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:14:52.736968 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:14:52.763351 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:14:52.805464 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:14:52.805525 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:14:52.805552 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:14:52.821851 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:14:52.821907 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:14:52.855927 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:14:52.861627 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:14:52.862609 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:14:52.868069 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:14:52.942294 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:14:52.942324 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:14:52.942341 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:14:52.942355 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:14:52.942370 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:14:52.899055 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:14:52.962275 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:14:52.979123 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:14:52.996132 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:14:53.175974 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:14:53.183251 ignition[640]: Ignition 2.19.0
Jan 17 00:14:53.198329 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:14:53.183263 ignition[640]: Stage: fetch-offline
Jan 17 00:14:53.222142 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:14:53.183317 ignition[640]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:53.183329 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:14:53.270447 systemd-networkd[757]: lo: Link UP
Jan 17 00:14:53.183454 ignition[640]: parsed url from cmdline: ""
Jan 17 00:14:53.270452 systemd-networkd[757]: lo: Gained carrier
Jan 17 00:14:53.183462 ignition[640]: no config URL provided
Jan 17 00:14:53.272409 systemd-networkd[757]: Enumeration completed
Jan 17 00:14:53.183472 ignition[640]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:14:53.272529 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:14:53.183484 ignition[640]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:14:53.273259 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:14:53.183492 ignition[640]: failed to fetch config: resource requires networking
Jan 17 00:14:53.273267 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:14:53.183761 ignition[640]: Ignition finished successfully
Jan 17 00:14:53.275432 systemd-networkd[757]: eth0: Link UP
Jan 17 00:14:53.372455 ignition[760]: Ignition 2.19.0
Jan 17 00:14:53.275440 systemd-networkd[757]: eth0: Gained carrier
Jan 17 00:14:53.372464 ignition[760]: Stage: fetch
Jan 17 00:14:53.275452 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:14:53.372694 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:53.298291 systemd[1]: Reached target network.target - Network.
Jan 17 00:14:53.372706 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:14:53.298971 systemd-networkd[757]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3'
Jan 17 00:14:53.372829 ignition[760]: parsed url from cmdline: ""
Jan 17 00:14:53.298985 systemd-networkd[757]: eth0: DHCPv4 address 10.128.0.91/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 17 00:14:53.372836 ignition[760]: no config URL provided
Jan 17 00:14:53.318090 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:14:53.372844 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:14:53.383078 unknown[760]: fetched base config from "system"
Jan 17 00:14:53.372854 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:14:53.383092 unknown[760]: fetched base config from "system"
Jan 17 00:14:53.372917 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 17 00:14:53.383102 unknown[760]: fetched user config from "gcp"
Jan 17 00:14:53.376608 ignition[760]: GET result: OK
Jan 17 00:14:53.386055 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:14:53.376717 ignition[760]: parsing config with SHA512: 00c23239f3a3d9ec33401188c7d86d2cba4fdfb84080532c542910b6ba48a2a72618ec3045a8a4c8b94976d5c5183bae7a450bc2a4b533d296c6744532099528
Jan 17 00:14:53.413453 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:14:53.383878 ignition[760]: fetch: fetch complete
Jan 17 00:14:53.457382 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:14:53.383905 ignition[760]: fetch: fetch passed
Jan 17 00:14:53.482086 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:14:53.383967 ignition[760]: Ignition finished successfully
Jan 17 00:14:53.558021 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:14:53.434080 ignition[767]: Ignition 2.19.0
Jan 17 00:14:53.572909 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:14:53.434088 ignition[767]: Stage: kargs
Jan 17 00:14:53.579182 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:14:53.434261 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:53.596192 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:14:53.434272 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:14:53.613196 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:14:53.435362 ignition[767]: kargs: kargs passed
Jan 17 00:14:53.632221 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:14:53.435416 ignition[767]: Ignition finished successfully
Jan 17 00:14:53.653176 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:14:53.555502 ignition[773]: Ignition 2.19.0
Jan 17 00:14:53.555510 ignition[773]: Stage: disks
Jan 17 00:14:53.555709 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:53.555722 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:14:53.556757 ignition[773]: disks: disks passed
Jan 17 00:14:53.556811 ignition[773]: Ignition finished successfully
Jan 17 00:14:53.709523 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 00:14:53.842880 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:14:53.876078 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:14:53.992104 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:14:53.992978 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:14:53.993830 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:14:54.015025 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:14:54.041321 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:14:54.065072 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (790)
Jan 17 00:14:54.065695 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:14:54.088495 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:14:54.088522 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:14:54.088538 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:14:54.065783 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:14:54.065821 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:14:54.110481 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:14:54.110543 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:14:54.150876 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:14:54.151251 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:14:54.181124 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:14:54.312880 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:14:54.323057 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:14:54.333014 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:14:54.343010 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:14:54.478294 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:14:54.508025 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:14:54.535039 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:14:54.526443 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:14:54.544370 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:14:54.584083 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:14:54.594164 ignition[902]: INFO : Ignition 2.19.0
Jan 17 00:14:54.594164 ignition[902]: INFO : Stage: mount
Jan 17 00:14:54.594164 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:54.594164 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:14:54.594164 ignition[902]: INFO : mount: mount passed
Jan 17 00:14:54.594164 ignition[902]: INFO : Ignition finished successfully
Jan 17 00:14:54.604302 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:14:54.628032 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:14:54.716414 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (914)
Jan 17 00:14:54.716454 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:14:54.716480 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:14:54.666113 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:14:54.751049 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:14:54.751091 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:14:54.751116 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:14:54.748295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:14:54.786388 ignition[931]: INFO : Ignition 2.19.0
Jan 17 00:14:54.786388 ignition[931]: INFO : Stage: files
Jan 17 00:14:54.801012 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:14:54.801012 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:14:54.801012 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:14:54.801012 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:14:54.801012 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:14:54.801012 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:14:54.801012 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:14:54.801012 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:14:54.801012 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:14:54.801012 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 17 00:14:54.799026 unknown[931]: wrote ssh authorized keys file for user: core
Jan 17 00:14:54.936026 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:14:55.058550 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 17 00:14:55.225086 systemd-networkd[757]: eth0: Gained IPv6LL
Jan 17 00:14:55.534994 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 00:14:56.294731 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:14:56.313137 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:14:56.313137 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:14:56.313137 ignition[931]: INFO : files: files passed
Jan 17 00:14:56.313137 ignition[931]: INFO : Ignition finished successfully
Jan 17 00:14:56.298935 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:14:56.330146 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:14:56.358177 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:14:56.408503 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:14:56.544004 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:14:56.544004 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:14:56.408655 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:14:56.603044 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:14:56.420336 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:14:56.432343 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:14:56.470087 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:14:56.546976 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:14:56.547125 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:14:56.569313 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:14:56.593132 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:14:56.613169 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:14:56.619090 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:14:56.673117 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:14:56.700080 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:14:56.722315 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:14:56.753271 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:14:56.764314 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:14:56.783281 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:14:56.783465 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:14:56.837175 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:14:56.863250 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:14:56.873273 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:14:56.888352 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:14:56.906322 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:14:56.925308 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:14:56.943289 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:14:56.960303 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:14:56.981294 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:14:56.998281 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:14:57.015216 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:14:57.015404 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 17 00:14:57.056077 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:14:57.056437 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:14:57.074259 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:14:57.074419 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:14:57.094343 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:14:57.094543 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:14:57.133329 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:14:57.133536 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:14:57.141344 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:14:57.141512 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:14:57.208111 ignition[983]: INFO : Ignition 2.19.0 Jan 17 00:14:57.208111 ignition[983]: INFO : Stage: umount Jan 17 00:14:57.208111 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:14:57.208111 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:14:57.208111 ignition[983]: INFO : umount: umount passed Jan 17 00:14:57.208111 ignition[983]: INFO : Ignition finished successfully Jan 17 00:14:57.168113 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:14:57.224176 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:14:57.251186 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:14:57.251397 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:14:57.282233 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:14:57.282402 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:14:57.314524 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:14:57.315529 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:14:57.315658 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:14:57.331588 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:14:57.331696 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:14:57.353071 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:14:57.353212 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:14:57.374002 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:14:57.374057 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:14:57.393164 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:14:57.393222 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:14:57.401183 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:14:57.401233 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:14:57.418191 systemd[1]: Stopped target network.target - Network. Jan 17 00:14:57.435152 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:14:57.435223 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:14:57.450212 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:14:57.467141 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 17 00:14:57.470944 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:14:57.483153 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:14:57.512099 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:14:57.520186 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:14:57.520239 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:14:57.535182 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:14:57.535238 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:14:57.552177 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:14:57.552237 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:14:57.569202 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:14:57.569259 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:14:57.586200 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:14:57.586258 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:14:57.603378 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:14:57.612947 systemd-networkd[757]: eth0: DHCPv6 lease lost Jan 17 00:14:57.631197 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:14:57.651484 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:14:57.651606 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:14:57.662597 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:14:57.662844 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:14:57.679625 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:14:57.679679 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:14:57.700005 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:14:57.725961 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:14:57.726056 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:14:57.738061 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:14:57.738134 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:14:57.756139 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:14:57.756192 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:14:58.185994 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 17 00:14:57.764177 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:14:57.764228 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:14:57.792257 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:14:57.801700 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:14:57.801860 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:14:57.823259 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:14:57.823378 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:14:57.846057 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 17 00:14:57.846119 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:14:57.864004 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:14:57.864079 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:14:57.881980 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:14:57.882060 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:14:57.908975 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:14:57.909067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:14:57.943069 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:14:57.974981 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:14:57.975078 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:14:57.975189 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 00:14:57.975234 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:14:58.004062 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:14:58.004139 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:14:58.025071 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:14:58.025153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:14:58.044541 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:14:58.044658 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:14:58.064389 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:14:58.064497 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:14:58.084127 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:14:58.111164 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:14:58.137928 systemd[1]: Switching root. 
Jan 17 00:14:58.492982 systemd-journald[184]: Journal stopped Jan 17 00:14:50.076232 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:14:50.076276 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:14:50.076293 kernel: BIOS-provided physical RAM map: Jan 17 00:14:50.076307 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 17 00:14:50.076320 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 17 00:14:50.076334 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 17 00:14:50.076351 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 17 00:14:50.076370 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 17 00:14:50.076384 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 17 00:14:50.076399 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 17 00:14:50.076414 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 17 00:14:50.076428 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 17 00:14:50.076443 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 17 00:14:50.076457 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 17 00:14:50.076479 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 17 00:14:50.076495 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 17 00:14:50.076512 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 17 00:14:50.076536 kernel: NX (Execute Disable) protection: active Jan 17 00:14:50.076552 kernel: APIC: Static calls initialized Jan 17 00:14:50.076568 kernel: efi: EFI v2.7 by EDK II Jan 17 00:14:50.076585 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 Jan 17 00:14:50.076602 kernel: SMBIOS 2.4 present. 
Jan 17 00:14:50.076618 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025 Jan 17 00:14:50.076633 kernel: Hypervisor detected: KVM Jan 17 00:14:50.076652 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:14:50.076667 kernel: kvm-clock: using sched offset of 12503798322 cycles Jan 17 00:14:50.076685 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:14:50.076702 kernel: tsc: Detected 2299.998 MHz processor Jan 17 00:14:50.076718 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:14:50.076735 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:14:50.076752 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 17 00:14:50.076768 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 17 00:14:50.076785 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:14:50.076805 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 17 00:14:50.076822 kernel: Using GB pages for direct mapping Jan 17 00:14:50.076839 kernel: Secure boot disabled Jan 17 00:14:50.076856 kernel: ACPI: Early table checksum verification disabled Jan 17 00:14:50.076873 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 17 00:14:50.076913 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 17 00:14:50.076931 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 17 00:14:50.076955 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 17 00:14:50.076976 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 17 00:14:50.076994 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Jan 17 00:14:50.077012 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 17 00:14:50.077030 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 17 00:14:50.077047 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 17 00:14:50.077063 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 17 00:14:50.077085 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 17 00:14:50.077103 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 17 00:14:50.077120 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 17 00:14:50.077137 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 17 00:14:50.077155 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 17 00:14:50.077172 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 17 00:14:50.077189 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 17 00:14:50.077207 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 17 00:14:50.077224 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 17 00:14:50.077246 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 17 00:14:50.077264 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 00:14:50.077282 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 00:14:50.077300 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 00:14:50.077318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Jan 17 00:14:50.077336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 17 00:14:50.077354 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 17 00:14:50.077372 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 17 00:14:50.077391 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 17 00:14:50.077413 kernel: Zone ranges: Jan 17 00:14:50.077431 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:14:50.077450 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 00:14:50.077468 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 17 00:14:50.077487 kernel: Movable zone start for each node Jan 17 00:14:50.077505 kernel: Early memory node ranges Jan 17 00:14:50.077523 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 17 00:14:50.077547 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 17 00:14:50.077565 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 17 00:14:50.077587 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 17 00:14:50.077605 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 17 00:14:50.077624 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 17 00:14:50.077642 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:14:50.077660 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 17 00:14:50.077678 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 17 00:14:50.077697 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 17 00:14:50.077715 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 17 00:14:50.077733 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 17 00:14:50.077751 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:14:50.077774 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:14:50.077793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:14:50.077811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:14:50.077830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:14:50.077848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 00:14:50.077867 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:14:50.077896 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 00:14:50.077927 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 00:14:50.077950 kernel: Booting paravirtualized kernel on KVM Jan 17 00:14:50.077968 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:14:50.077987 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 00:14:50.078005 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 17 00:14:50.078024 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 17 00:14:50.078040 kernel: pcpu-alloc: [0] 0 1 Jan 17 00:14:50.078058 kernel: kvm-guest: PV spinlocks enabled Jan 17 00:14:50.078077 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:14:50.078097 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:14:50.078120 kernel: random: crng init done Jan 17 00:14:50.078138 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 17 00:14:50.078157 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:14:50.078175 kernel: Fallback order for Node 0: 0 Jan 17 00:14:50.078193 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 17 00:14:50.078212 kernel: Policy zone: Normal Jan 17 00:14:50.078231 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:14:50.078249 kernel: software IO TLB: area num 2. Jan 17 00:14:50.078268 kernel: Memory: 7513184K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 347140K reserved, 0K cma-reserved) Jan 17 00:14:50.078289 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:14:50.078308 kernel: Kernel/User page tables isolation: enabled Jan 17 00:14:50.078326 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:14:50.078344 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:14:50.078362 kernel: Dynamic Preempt: voluntary Jan 17 00:14:50.078380 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:14:50.078399 kernel: rcu: RCU event tracing is enabled. Jan 17 00:14:50.078419 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:14:50.078454 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:14:50.078472 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:14:50.078491 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:14:50.078510 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:14:50.078540 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:14:50.078559 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 00:14:50.078577 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:14:50.078596 kernel: Console: colour dummy device 80x25 Jan 17 00:14:50.078619 kernel: printk: console [ttyS0] enabled Jan 17 00:14:50.078638 kernel: ACPI: Core revision 20230628 Jan 17 00:14:50.078656 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:14:50.078673 kernel: x2apic enabled Jan 17 00:14:50.078692 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:14:50.078712 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 17 00:14:50.078732 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 00:14:50.078752 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 17 00:14:50.078771 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 17 00:14:50.078795 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 17 00:14:50.078814 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:14:50.078832 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 17 00:14:50.078851 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 17 00:14:50.078871 kernel: Spectre V2 : Mitigation: IBRS Jan 17 00:14:50.078905 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:14:50.078922 kernel: RETBleed: Mitigation: IBRS Jan 17 00:14:50.078937 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 00:14:50.078954 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 17 00:14:50.078975 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 00:14:50.078991 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 00:14:50.079007 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:14:50.079023 kernel: active return thunk: its_return_thunk Jan 17 00:14:50.079040 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 17 00:14:50.079057 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:14:50.079075 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:14:50.079093 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:14:50.079112 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:14:50.079135 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 00:14:50.079153 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:14:50.079171 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:14:50.079190 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:14:50.079207 kernel: landlock: Up and running. Jan 17 00:14:50.079245 kernel: SELinux: Initializing. Jan 17 00:14:50.079265 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:14:50.079283 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:14:50.079303 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 17 00:14:50.079327 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:14:50.079345 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:14:50.079364 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:14:50.079383 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 17 00:14:50.079401 kernel: signal: max sigframe size: 1776 Jan 17 00:14:50.079419 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:14:50.079438 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:14:50.079457 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:14:50.079475 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:14:50.079497 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:14:50.079516 kernel: .... 
node #0, CPUs: #1 Jan 17 00:14:50.079543 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 17 00:14:50.079563 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 17 00:14:50.079582 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:14:50.079600 kernel: smpboot: Max logical packages: 1 Jan 17 00:14:50.079618 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 17 00:14:50.079636 kernel: devtmpfs: initialized Jan 17 00:14:50.079659 kernel: x86/mm: Memory block size: 128MB Jan 17 00:14:50.079678 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 17 00:14:50.079697 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:14:50.079715 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:14:50.079733 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:14:50.079752 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:14:50.079770 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:14:50.079788 kernel: audit: type=2000 audit(1768608888.893:1): state=initialized audit_enabled=0 res=1 Jan 17 00:14:50.079806 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:14:50.079828 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:14:50.079846 kernel: cpuidle: using governor menu Jan 17 00:14:50.079865 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:14:50.079897 kernel: dca service started, version 1.12.1 Jan 17 00:14:50.079916 kernel: PCI: Using configuration type 1 for base access Jan 17 00:14:50.079935 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 00:14:50.079953 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:14:50.079972 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:14:50.079990 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:14:50.080013 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:14:50.080031 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:14:50.080050 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:14:50.080069 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:14:50.080087 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 17 00:14:50.080106 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:14:50.080124 kernel: ACPI: Interpreter enabled Jan 17 00:14:50.080142 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 00:14:50.080160 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:14:50.080182 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:14:50.080201 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 17 00:14:50.080220 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 17 00:14:50.080238 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:14:50.080511 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:14:50.080725 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 00:14:50.080959 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 00:14:50.080993 kernel: PCI host bridge to bus 0000:00 Jan 17 00:14:50.081185 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 00:14:50.081361 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 00:14:50.081540 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 00:14:50.081709 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 17 00:14:50.081878 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 00:14:50.082116 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 00:14:50.082330 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 17 00:14:50.082535 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 00:14:50.082729 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 17 00:14:50.082950 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 17 00:14:50.083158 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 17 00:14:50.083358 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 17 00:14:50.083579 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:14:50.083774 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 17 00:14:50.084034 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 17 00:14:50.084238 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 00:14:50.084432 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 17 00:14:50.084639 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 17 00:14:50.084666 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 00:14:50.084693 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 00:14:50.084711 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 
00:14:50.084731 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 00:14:50.084751 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 00:14:50.084771 kernel: iommu: Default domain type: Translated Jan 17 00:14:50.084791 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:14:50.084810 kernel: efivars: Registered efivars operations Jan 17 00:14:50.084829 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:14:50.084850 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 00:14:50.084869 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 17 00:14:50.084908 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 17 00:14:50.084927 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 17 00:14:50.084946 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 17 00:14:50.084965 kernel: vgaarb: loaded Jan 17 00:14:50.084986 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 00:14:50.085006 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:14:50.085025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:14:50.085044 kernel: pnp: PnP ACPI init Jan 17 00:14:50.085064 kernel: pnp: PnP ACPI: found 7 devices Jan 17 00:14:50.085089 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:14:50.085109 kernel: NET: Registered PF_INET protocol family Jan 17 00:14:50.085129 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 00:14:50.085148 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 17 00:14:50.085167 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:14:50.085187 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:14:50.085207 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 00:14:50.085226 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 17 00:14:50.085250 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:14:50.085269 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:14:50.085288 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:14:50.085308 kernel: NET: Registered PF_XDP protocol family Jan 17 00:14:50.085490 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 00:14:50.085671 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:14:50.085841 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:14:50.086036 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 17 00:14:50.086237 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 00:14:50.086264 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:14:50.086283 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 00:14:50.086301 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 17 00:14:50.086319 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 00:14:50.086340 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 00:14:50.086360 kernel: clocksource: Switched to clocksource tsc Jan 17 00:14:50.086380 kernel: Initialise system trusted keyrings Jan 17 00:14:50.086404 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 
Jan 17 00:14:50.086424 kernel: Key type asymmetric registered Jan 17 00:14:50.086442 kernel: Asymmetric key parser 'x509' registered Jan 17 00:14:50.086461 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:14:50.086480 kernel: io scheduler mq-deadline registered Jan 17 00:14:50.086500 kernel: io scheduler kyber registered Jan 17 00:14:50.086520 kernel: io scheduler bfq registered Jan 17 00:14:50.086545 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:14:50.086565 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 17 00:14:50.086800 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 17 00:14:50.086825 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 17 00:14:50.087039 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 17 00:14:50.087064 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 17 00:14:50.087243 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 17 00:14:50.087267 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:14:50.087285 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:14:50.087304 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 00:14:50.087322 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 17 00:14:50.087348 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 17 00:14:50.087541 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 17 00:14:50.087567 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:14:50.087586 kernel: i8042: Warning: Keylock active Jan 17 00:14:50.087604 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:14:50.087622 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:14:50.087809 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 17 00:14:50.088016 kernel: rtc_cmos 00:00: registered as rtc0 Jan 17 00:14:50.088188 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:14:49 UTC (1768608889) Jan 17 00:14:50.088355 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 17 00:14:50.088378 kernel: intel_pstate: CPU model not supported Jan 17 00:14:50.088397 kernel: pstore: Using crash dump compression: deflate Jan 17 00:14:50.088415 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:14:50.088433 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:14:50.088452 kernel: Segment Routing with IPv6 Jan 17 00:14:50.088471 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:14:50.088495 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:14:50.088514 kernel: Key type dns_resolver registered Jan 17 00:14:50.088540 kernel: IPI shorthand broadcast: enabled Jan 17 00:14:50.088559 kernel: sched_clock: Marking stable (816003794, 124513037)->(952176644, -11659813) Jan 17 00:14:50.088577 kernel: registered taskstats version 1 Jan 17 00:14:50.088596 kernel: Loading compiled-in X.509 certificates Jan 17 00:14:50.088614 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:14:50.088632 kernel: Key type .fscrypt registered Jan 17 00:14:50.088650 kernel: Key type fscrypt-provisioning registered Jan 17 00:14:50.088672 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:14:50.088691 kernel: ima: No architecture policies found Jan 17 00:14:50.088709 kernel: clk: Disabling unused clocks Jan 17 
00:14:50.088728 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:14:50.088746 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:14:50.088765 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:14:50.088784 kernel: Run /init as init process Jan 17 00:14:50.088802 kernel: with arguments: Jan 17 00:14:50.088820 kernel: /init Jan 17 00:14:50.088842 kernel: with environment: Jan 17 00:14:50.088860 kernel: HOME=/ Jan 17 00:14:50.088878 kernel: TERM=linux Jan 17 00:14:50.088933 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 00:14:50.088956 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:14:50.088978 systemd[1]: Detected virtualization google. Jan 17 00:14:50.088998 systemd[1]: Detected architecture x86-64. Jan 17 00:14:50.089021 systemd[1]: Running in initrd. Jan 17 00:14:50.089040 systemd[1]: No hostname configured, using default hostname. Jan 17 00:14:50.089059 systemd[1]: Hostname set to . Jan 17 00:14:50.089079 systemd[1]: Initializing machine ID from random generator. Jan 17 00:14:50.089098 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:14:50.089117 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:14:50.089137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:14:50.089157 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:14:50.089181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:14:50.089200 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:14:50.089220 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:14:50.089242 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:14:50.089261 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:14:50.089281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:14:50.089301 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:14:50.089324 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:14:50.089344 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:14:50.089385 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:14:50.089409 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:14:50.089428 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:14:50.089448 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:14:50.089472 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:14:50.089493 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:14:50.089513 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 00:14:50.089539 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:14:50.089560 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:14:50.089580 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:14:50.089600 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:14:50.089620 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:14:50.089640 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:14:50.089664 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:14:50.089684 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:14:50.089704 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:14:50.089724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:14:50.089744 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:14:50.089795 systemd-journald[184]: Collecting audit messages is disabled. Jan 17 00:14:50.089841 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:14:50.089861 systemd-journald[184]: Journal started Jan 17 00:14:50.089913 systemd-journald[184]: Runtime Journal (/run/log/journal/16d69938261d4452a3b9cd7656c7b697) is 8.0M, max 148.7M, 140.7M free. Jan 17 00:14:50.095445 systemd-modules-load[185]: Inserted module 'overlay' Jan 17 00:14:50.108046 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:14:50.099482 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:14:50.123442 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:14:50.125582 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:14:50.133862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:14:50.142162 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:14:50.147998 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:14:50.145021 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:14:50.152151 kernel: Bridge firewalling registered Jan 17 00:14:50.150976 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 17 00:14:50.156484 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:14:50.171093 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:14:50.182078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:14:50.183954 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:14:50.206094 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:14:50.214283 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:14:50.218236 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:14:50.228105 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:14:50.231320 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 00:14:50.264193 dracut-cmdline[216]: dracut-dracut-053 Jan 17 00:14:50.269026 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:14:50.292789 systemd-resolved[217]: Positive Trust Anchors: Jan 17 00:14:50.293221 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:14:50.293303 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:14:50.298491 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 17 00:14:50.300290 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:14:50.323108 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:14:50.375926 kernel: SCSI subsystem initialized Jan 17 00:14:50.387935 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:14:50.399929 kernel: iscsi: registered transport (tcp) Jan 17 00:14:50.423964 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:14:50.424028 kernel: QLogic iSCSI HBA Driver Jan 17 00:14:50.475914 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:14:50.483114 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:14:50.522130 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:14:50.522205 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:14:50.522242 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:14:50.567934 kernel: raid6: avx2x4 gen() 18130 MB/s Jan 17 00:14:50.584914 kernel: raid6: avx2x2 gen() 18208 MB/s Jan 17 00:14:50.602279 kernel: raid6: avx2x1 gen() 13957 MB/s Jan 17 00:14:50.602315 kernel: raid6: using algorithm avx2x2 gen() 18208 MB/s Jan 17 00:14:50.620297 kernel: raid6: .... xor() 17931 MB/s, rmw enabled Jan 17 00:14:50.620339 kernel: raid6: using avx2x2 recovery algorithm Jan 17 00:14:50.642920 kernel: xor: automatically using best checksumming function avx Jan 17 00:14:50.815933 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:14:50.829146 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:14:50.836140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:14:50.867200 systemd-udevd[400]: Using default interface naming scheme 'v255'. Jan 17 00:14:50.874088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:14:50.885820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 17 00:14:50.916174 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jan 17 00:14:50.952462 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:14:50.966070 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:14:51.046526 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:14:51.059132 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:14:51.105079 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:14:51.111744 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:14:51.115993 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:14:51.119988 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:14:51.128094 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:14:51.157143 kernel: scsi host0: Virtio SCSI HBA Jan 17 00:14:51.157237 kernel: blk-mq: reduced tag depth to 10240 Jan 17 00:14:51.191023 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 17 00:14:51.210052 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:14:51.219058 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:14:51.262219 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 00:14:51.262288 kernel: AES CTR mode by8 optimization enabled Jan 17 00:14:51.261771 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:14:51.261999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:14:51.266291 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:14:51.280965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:14:51.293000 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Jan 17 00:14:51.293320 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 17 00:14:51.281188 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:14:51.285025 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:14:51.298705 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 17 00:14:51.299926 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 17 00:14:51.300678 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 00:14:51.298593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:14:51.309911 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:14:51.309959 kernel: GPT:17805311 != 33554431 Jan 17 00:14:51.309992 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:14:51.310016 kernel: GPT:17805311 != 33554431 Jan 17 00:14:51.311202 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:14:51.311251 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:14:51.313003 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 17 00:14:51.322937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:14:51.336192 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 17 00:14:51.383956 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (451) Jan 17 00:14:51.383663 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:14:51.396035 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (461) Jan 17 00:14:51.405914 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 17 00:14:51.420770 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 17 00:14:51.436735 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 17 00:14:51.440990 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 17 00:14:51.452666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 17 00:14:51.463144 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:14:51.474729 disk-uuid[551]: Primary Header is updated. Jan 17 00:14:51.474729 disk-uuid[551]: Secondary Entries is updated. Jan 17 00:14:51.474729 disk-uuid[551]: Secondary Header is updated. Jan 17 00:14:51.485929 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:14:51.502917 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:14:51.509991 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:14:52.510679 disk-uuid[552]: The operation has completed successfully. Jan 17 00:14:52.516062 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:14:52.588217 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:14:52.588375 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:14:52.605074 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:14:52.636205 sh[569]: Success Jan 17 00:14:52.657096 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 00:14:52.730337 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:14:52.736968 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:14:52.763351 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:14:52.805464 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:14:52.805525 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:14:52.805552 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:14:52.821851 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:14:52.821907 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:14:52.855927 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 00:14:52.861627 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:14:52.862609 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:14:52.868069 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 17 00:14:52.942294 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:14:52.942324 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:14:52.942341 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:14:52.942355 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:14:52.942370 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:14:52.899055 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:14:52.962275 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:14:52.979123 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:14:52.996132 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:14:53.175974 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:14:53.183251 ignition[640]: Ignition 2.19.0 Jan 17 00:14:53.198329 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:14:53.183263 ignition[640]: Stage: fetch-offline Jan 17 00:14:53.222142 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:14:53.183317 ignition[640]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:14:53.183329 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:14:53.270447 systemd-networkd[757]: lo: Link UP Jan 17 00:14:53.183454 ignition[640]: parsed url from cmdline: "" Jan 17 00:14:53.270452 systemd-networkd[757]: lo: Gained carrier Jan 17 00:14:53.183462 ignition[640]: no config URL provided Jan 17 00:14:53.272409 systemd-networkd[757]: Enumeration completed Jan 17 00:14:53.183472 ignition[640]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:14:53.272529 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:14:53.183484 ignition[640]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:14:53.273259 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:14:53.183492 ignition[640]: failed to fetch config: resource requires networking Jan 17 00:14:53.273267 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:14:53.183761 ignition[640]: Ignition finished successfully Jan 17 00:14:53.275432 systemd-networkd[757]: eth0: Link UP Jan 17 00:14:53.372455 ignition[760]: Ignition 2.19.0 Jan 17 00:14:53.275440 systemd-networkd[757]: eth0: Gained carrier Jan 17 00:14:53.372464 ignition[760]: Stage: fetch Jan 17 00:14:53.275452 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:14:53.372694 ignition[760]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:14:53.298291 systemd[1]: Reached target network.target - Network. 
Jan 17 00:14:53.372706 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:14:53.298971 systemd-networkd[757]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3' Jan 17 00:14:53.372829 ignition[760]: parsed url from cmdline: "" Jan 17 00:14:53.298985 systemd-networkd[757]: eth0: DHCPv4 address 10.128.0.91/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 00:14:53.372836 ignition[760]: no config URL provided Jan 17 00:14:53.318090 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 00:14:53.372844 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:14:53.383078 unknown[760]: fetched base config from "system" Jan 17 00:14:53.372854 ignition[760]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:14:53.383092 unknown[760]: fetched base config from "system" Jan 17 00:14:53.372917 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 17 00:14:53.383102 unknown[760]: fetched user config from "gcp" Jan 17 00:14:53.376608 ignition[760]: GET result: OK Jan 17 00:14:53.386055 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:14:53.376717 ignition[760]: parsing config with SHA512: 00c23239f3a3d9ec33401188c7d86d2cba4fdfb84080532c542910b6ba48a2a72618ec3045a8a4c8b94976d5c5183bae7a450bc2a4b533d296c6744532099528 Jan 17 00:14:53.413453 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:14:53.383878 ignition[760]: fetch: fetch complete Jan 17 00:14:53.457382 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:14:53.383905 ignition[760]: fetch: fetch passed Jan 17 00:14:53.482086 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:14:53.383967 ignition[760]: Ignition finished successfully Jan 17 00:14:53.558021 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:14:53.434080 ignition[767]: Ignition 2.19.0 Jan 17 00:14:53.572909 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:14:53.434088 ignition[767]: Stage: kargs Jan 17 00:14:53.579182 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:14:53.434261 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:14:53.596192 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:14:53.434272 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:14:53.613196 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:14:53.435362 ignition[767]: kargs: kargs passed Jan 17 00:14:53.632221 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:14:53.435416 ignition[767]: Ignition finished successfully Jan 17 00:14:53.653176 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 17 00:14:53.555502 ignition[773]: Ignition 2.19.0 Jan 17 00:14:53.555510 ignition[773]: Stage: disks Jan 17 00:14:53.555709 ignition[773]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:14:53.555722 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:14:53.556757 ignition[773]: disks: disks passed Jan 17 00:14:53.556811 ignition[773]: Ignition finished successfully Jan 17 00:14:53.709523 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 00:14:53.842880 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:14:53.876078 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:14:53.992104 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:14:53.992978 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:14:53.993830 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:14:54.015025 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:14:54.041321 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:14:54.065072 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (790) Jan 17 00:14:54.065695 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 00:14:54.088495 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:14:54.088522 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:14:54.088538 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:14:54.065783 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:14:54.065821 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:14:54.110481 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:14:54.110543 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:14:54.150876 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:14:54.151251 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:14:54.181124 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:14:54.312880 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:14:54.323057 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:14:54.333014 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:14:54.343010 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:14:54.478294 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:14:54.508025 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:14:54.535039 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:14:54.526443 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:14:54.544370 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:14:54.584083 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 17 00:14:54.594164 ignition[902]: INFO : Ignition 2.19.0 Jan 17 00:14:54.594164 ignition[902]: INFO : Stage: mount Jan 17 00:14:54.594164 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:14:54.594164 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:14:54.594164 ignition[902]: INFO : mount: mount passed Jan 17 00:14:54.594164 ignition[902]: INFO : Ignition finished successfully Jan 17 00:14:54.604302 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:14:54.628032 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:14:54.716414 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (914) Jan 17 00:14:54.716454 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:14:54.716480 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:14:54.666113 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:14:54.751049 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:14:54.751091 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:14:54.751116 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:14:54.748295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:14:54.786388 ignition[931]: INFO : Ignition 2.19.0 Jan 17 00:14:54.786388 ignition[931]: INFO : Stage: files Jan 17 00:14:54.801012 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:14:54.801012 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:14:54.801012 ignition[931]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:14:54.801012 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:14:54.801012 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:14:54.801012 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:14:54.801012 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:14:54.801012 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:14:54.801012 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:14:54.801012 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 17 00:14:54.799026 unknown[931]: wrote ssh authorized keys file for user: core Jan 17 00:14:54.936026 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:14:55.058550 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:14:55.075009 ignition[931]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:14:55.075009 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 17 00:14:55.225086 systemd-networkd[757]: eth0: Gained IPv6LL Jan 17 00:14:55.534994 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:14:56.294731 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:14:56.313137 ignition[931]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:14:56.313137 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:14:56.313137 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:14:56.313137 ignition[931]: INFO : files: files passed Jan 17 00:14:56.313137 ignition[931]: INFO : Ignition finished successfully Jan 17 00:14:56.298935 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 17 00:14:56.330146 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:14:56.358177 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:14:56.408503 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:14:56.544004 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:14:56.544004 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:14:56.408655 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:14:56.603044 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:14:56.420336 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:14:56.432343 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:14:56.470087 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:14:56.546976 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:14:56.547125 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:14:56.569313 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:14:56.593132 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:14:56.613169 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:14:56.619090 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:14:56.673117 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:14:56.700080 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:14:56.722315 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:14:56.753271 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:14:56.764314 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:14:56.783281 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:14:56.783465 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:14:56.837175 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:14:56.863250 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:14:56.873273 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:14:56.888352 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:14:56.906322 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:14:56.925308 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:14:56.943289 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:14:56.960303 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:14:56.981294 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:14:56.998281 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:14:57.015216 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:14:57.015404 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 17 00:14:57.056077 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:14:57.056437 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:14:57.074259 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:14:57.074419 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:14:57.094343 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:14:57.094543 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:14:57.133329 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:14:57.133536 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:14:57.141344 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:14:57.141512 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:14:57.208111 ignition[983]: INFO : Ignition 2.19.0 Jan 17 00:14:57.208111 ignition[983]: INFO : Stage: umount Jan 17 00:14:57.208111 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:14:57.208111 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:14:57.208111 ignition[983]: INFO : umount: umount passed Jan 17 00:14:57.208111 ignition[983]: INFO : Ignition finished successfully Jan 17 00:14:57.168113 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:14:57.224176 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:14:57.251186 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:14:57.251397 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:14:57.282233 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:14:57.282402 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:14:57.314524 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:14:57.315529 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:14:57.315658 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:14:57.331588 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:14:57.331696 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:14:57.353071 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:14:57.353212 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:14:57.374002 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:14:57.374057 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:14:57.393164 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:14:57.393222 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:14:57.401183 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:14:57.401233 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:14:57.418191 systemd[1]: Stopped target network.target - Network. Jan 17 00:14:57.435152 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:14:57.435223 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:14:57.450212 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:14:57.467141 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 17 00:14:57.470944 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:14:57.483153 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:14:57.512099 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:14:57.520186 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:14:57.520239 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:14:57.535182 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:14:57.535238 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:14:57.552177 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:14:57.552237 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:14:57.569202 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:14:57.569259 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:14:57.586200 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:14:57.586258 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:14:57.603378 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:14:57.612947 systemd-networkd[757]: eth0: DHCPv6 lease lost Jan 17 00:14:57.631197 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:14:57.651484 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:14:57.651606 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:14:57.662597 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:14:57.662844 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:14:57.679625 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:14:57.679679 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:14:57.700005 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:14:57.725961 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:14:57.726056 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:14:57.738061 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:14:57.738134 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:14:57.756139 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:14:57.756192 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:14:58.185994 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 17 00:14:57.764177 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:14:57.764228 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:14:57.792257 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:14:57.801700 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:14:57.801860 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:14:57.823259 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:14:57.823378 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:14:57.846057 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 17 00:14:57.846119 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:14:57.864004 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:14:57.864079 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:14:57.881980 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:14:57.882060 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:14:57.908975 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:14:57.909067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:14:57.943069 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:14:57.974981 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:14:57.975078 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:14:57.975189 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 00:14:57.975234 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:14:58.004062 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:14:58.004139 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:14:58.025071 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:14:58.025153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:14:58.044541 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:14:58.044658 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:14:58.064389 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:14:58.064497 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:14:58.084127 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:14:58.111164 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:14:58.137928 systemd[1]: Switching root. Jan 17 00:14:58.492982 systemd-journald[184]: Journal stopped Jan 17 00:15:00.850277 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:15:00.850333 kernel: SELinux: policy capability open_perms=1 Jan 17 00:15:00.850356 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:15:00.850375 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:15:00.850394 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:15:00.850413 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:15:00.850435 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:15:00.850461 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:15:00.850481 kernel: audit: type=1403 audit(1768608898.804:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:15:00.850503 systemd[1]: Successfully loaded SELinux policy in 87.944ms. Jan 17 00:15:00.850527 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.476ms. 
Jan 17 00:15:00.850550 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:15:00.850573 systemd[1]: Detected virtualization google. Jan 17 00:15:00.850596 systemd[1]: Detected architecture x86-64. Jan 17 00:15:00.850625 systemd[1]: Detected first boot. Jan 17 00:15:00.850649 systemd[1]: Initializing machine ID from random generator. Jan 17 00:15:00.850671 zram_generator::config[1024]: No configuration found. Jan 17 00:15:00.850695 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:15:00.850718 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:15:00.850745 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:15:00.850769 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:15:00.850801 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:15:00.850825 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:15:00.850847 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:15:00.850870 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:15:00.850915 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:15:00.850945 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:15:00.850968 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:15:00.850991 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:15:00.851014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:15:00.851037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:15:00.851060 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:15:00.851083 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:15:00.851106 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:15:00.851134 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:15:00.851155 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:15:00.851177 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:15:00.851199 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:15:00.851219 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:15:00.851242 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:15:00.851271 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:15:00.851294 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:15:00.851318 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:15:00.851344 systemd[1]: Reached target slices.target - Slice Units. 
Jan 17 00:15:00.851367 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:15:00.851388 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:15:00.851411 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:15:00.851435 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:15:00.851459 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:15:00.851482 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:15:00.851511 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:15:00.851536 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:15:00.851559 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:15:00.851583 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:15:00.851608 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:00.851637 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:15:00.851661 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:15:00.851685 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:15:00.851710 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:15:00.851734 systemd[1]: Reached target machines.target - Containers. Jan 17 00:15:00.851758 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:15:00.851781 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:15:00.851818 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:15:00.851846 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:15:00.851869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:15:00.855095 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:15:00.855135 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:15:00.855160 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:15:00.855183 kernel: ACPI: bus type drm_connector registered Jan 17 00:15:00.855205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:15:00.855234 kernel: fuse: init (API version 7.39) Jan 17 00:15:00.855264 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:15:00.855289 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:15:00.855311 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:15:00.855334 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:15:00.855356 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:15:00.855378 kernel: loop: module loaded Jan 17 00:15:00.855397 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:15:00.855420 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 17 00:15:00.855444 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:15:00.855509 systemd-journald[1111]: Collecting audit messages is disabled. Jan 17 00:15:00.855564 systemd-journald[1111]: Journal started Jan 17 00:15:00.855611 systemd-journald[1111]: Runtime Journal (/run/log/journal/86a611a4b2ad462d8f448b699b373d88) is 8.0M, max 148.7M, 140.7M free. Jan 17 00:15:00.858214 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:14:59.630018 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:14:59.651513 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 00:14:59.652096 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:15:00.877923 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:15:00.909062 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:15:00.909131 systemd[1]: Stopped verity-setup.service. Jan 17 00:15:00.933907 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:00.944933 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:15:00.955457 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:15:00.965228 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:15:00.975232 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:15:00.986285 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:15:00.996182 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:15:01.006163 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:15:01.016381 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:15:01.027294 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:15:01.038270 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:15:01.038504 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:15:01.050267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:15:01.050499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:15:01.062267 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:15:01.062492 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:15:01.072253 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:15:01.072465 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:15:01.084266 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:15:01.084487 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:15:01.094284 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:15:01.094500 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:15:01.104256 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:15:01.114240 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:15:01.125258 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 17 00:15:01.136233 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:15:01.161064 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:15:01.183014 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:15:01.195248 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:15:01.205031 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:15:01.205102 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:15:01.217291 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:15:01.234089 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:15:01.252099 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:15:01.262168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:15:01.269159 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:15:01.288139 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:15:01.299051 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:15:01.310316 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:15:01.317831 systemd-journald[1111]: Time spent on flushing to /var/log/journal/86a611a4b2ad462d8f448b699b373d88 is 135.287ms for 927 entries. Jan 17 00:15:01.317831 systemd-journald[1111]: System Journal (/var/log/journal/86a611a4b2ad462d8f448b699b373d88) is 8.0M, max 584.8M, 576.8M free. Jan 17 00:15:01.488020 systemd-journald[1111]: Received client request to flush runtime journal. Jan 17 00:15:01.488633 kernel: loop0: detected capacity change from 0 to 219144 Jan 17 00:15:01.329047 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:15:01.344195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:15:01.363593 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:15:01.383253 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:15:01.399799 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:15:01.414225 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:15:01.425171 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:15:01.436468 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:15:01.448429 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:15:01.465417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:15:01.485365 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:15:01.505525 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 17 00:15:01.518601 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:15:01.545911 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:15:01.558326 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:15:01.560878 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Jan 17 00:15:01.561068 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Jan 17 00:15:01.587718 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:15:01.604077 kernel: loop1: detected capacity change from 0 to 140768 Jan 17 00:15:01.617392 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:15:01.629708 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:15:01.635294 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:15:01.715920 kernel: loop2: detected capacity change from 0 to 142488 Jan 17 00:15:01.749851 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:15:01.770713 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:15:01.823920 kernel: loop3: detected capacity change from 0 to 54824 Jan 17 00:15:01.835504 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Jan 17 00:15:01.836094 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Jan 17 00:15:01.855277 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:15:01.905687 kernel: loop4: detected capacity change from 0 to 219144 Jan 17 00:15:01.941122 kernel: loop5: detected capacity change from 0 to 140768 Jan 17 00:15:01.996605 kernel: loop6: detected capacity change from 0 to 142488 Jan 17 00:15:02.047300 kernel: loop7: detected capacity change from 0 to 54824 Jan 17 00:15:02.083256 (sd-merge)[1170]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 17 00:15:02.084202 (sd-merge)[1170]: Merged extensions into '/usr'. Jan 17 00:15:02.092634 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:15:02.092653 systemd[1]: Reloading... Jan 17 00:15:02.314918 zram_generator::config[1196]: No configuration found. Jan 17 00:15:02.452656 ldconfig[1137]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:15:02.553590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:02.642861 systemd[1]: Reloading finished in 549 ms. Jan 17 00:15:02.678245 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:15:02.688348 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:15:02.699361 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:15:02.722202 systemd[1]: Starting ensure-sysext.service... Jan 17 00:15:02.737024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:15:02.760118 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 17 00:15:02.778977 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:15:02.778997 systemd[1]: Reloading... Jan 17 00:15:02.786424 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:15:02.787153 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:15:02.789023 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:15:02.789599 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 17 00:15:02.789725 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 17 00:15:02.795287 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:15:02.795311 systemd-tmpfiles[1238]: Skipping /boot Jan 17 00:15:02.821264 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:15:02.821288 systemd-tmpfiles[1238]: Skipping /boot Jan 17 00:15:02.849484 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Jan 17 00:15:02.919041 zram_generator::config[1267]: No configuration found. Jan 17 00:15:03.196912 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1302) Jan 17 00:15:03.238531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:03.280915 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:15:03.294920 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:15:03.310910 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 17 00:15:03.335283 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 17 00:15:03.361916 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 00:15:03.386920 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 00:15:03.424234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 17 00:15:03.439476 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:15:03.441467 systemd[1]: Reloading finished in 661 ms. Jan 17 00:15:03.469921 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:15:03.472895 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:15:03.479912 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:15:03.494012 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:15:03.519428 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:15:03.542633 systemd[1]: Finished ensure-sysext.service. Jan 17 00:15:03.573415 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:03.578105 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:15:03.596128 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:15:03.607279 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 17 00:15:03.617151 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:15:03.633467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:15:03.654086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:15:03.658991 lvm[1349]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:15:03.670094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:15:03.688767 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:15:03.704957 augenrules[1362]: No rules Jan 17 00:15:03.707123 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 00:15:03.716135 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:15:03.722717 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:15:03.741307 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:15:03.762102 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:15:03.782108 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:15:03.793017 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:15:03.811104 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:15:03.827114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:15:03.836983 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:03.845164 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:15:03.856567 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:15:03.868421 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:15:03.869103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:15:03.869330 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:15:03.869723 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:15:03.869954 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:15:03.870436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:15:03.870640 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:15:03.871097 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:15:03.871311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:15:03.877969 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:15:03.878384 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:15:03.887929 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 00:15:03.896863 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:15:03.899256 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:15:03.903101 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jan 17 00:15:03.907258 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 17 00:15:03.907356 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:15:03.907448 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:15:03.911373 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:15:03.915765 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:15:03.915842 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:15:03.928237 lvm[1391]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:15:03.976974 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:15:03.982215 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:15:04.008563 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:04.020573 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 17 00:15:04.033269 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:15:04.120838 systemd-networkd[1371]: lo: Link UP Jan 17 00:15:04.120858 systemd-networkd[1371]: lo: Gained carrier Jan 17 00:15:04.123466 systemd-networkd[1371]: Enumeration completed Jan 17 00:15:04.123644 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:15:04.124461 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:15:04.124478 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:15:04.125255 systemd-networkd[1371]: eth0: Link UP Jan 17 00:15:04.125267 systemd-networkd[1371]: eth0: Gained carrier Jan 17 00:15:04.125291 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:15:04.132202 systemd-resolved[1372]: Positive Trust Anchors: Jan 17 00:15:04.132221 systemd-resolved[1372]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:15:04.132293 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:15:04.140044 systemd-networkd[1371]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3' Jan 17 00:15:04.140069 systemd-networkd[1371]: eth0: DHCPv4 address 10.128.0.91/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 00:15:04.140509 systemd-resolved[1372]: Defaulting to hostname 'linux'. Jan 17 00:15:04.141099 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:15:04.152138 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:15:04.162110 systemd[1]: Reached target network.target - Network. Jan 17 00:15:04.171003 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:15:04.182020 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:15:04.192134 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:15:04.203103 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:15:04.214224 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:15:04.224120 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:15:04.235020 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:15:04.245997 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:15:04.246054 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:15:04.253985 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:15:04.262589 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:15:04.273604 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:15:04.286426 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:15:04.296753 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:15:04.307119 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:15:04.316970 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:15:04.325031 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:15:04.325078 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:15:04.337024 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:15:04.352100 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jan 17 00:15:04.370125 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:15:04.402028 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:15:04.418801 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:15:04.429019 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:15:04.430448 jq[1423]: false Jan 17 00:15:04.439087 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:15:04.449168 coreos-metadata[1421]: Jan 17 00:15:04.449 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 17 00:15:04.455990 coreos-metadata[1421]: Jan 17 00:15:04.453 INFO Fetch successful Jan 17 00:15:04.455990 coreos-metadata[1421]: Jan 17 00:15:04.453 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 17 00:15:04.455990 coreos-metadata[1421]: Jan 17 00:15:04.453 INFO Fetch successful Jan 17 00:15:04.455990 coreos-metadata[1421]: Jan 17 00:15:04.453 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 17 00:15:04.456348 coreos-metadata[1421]: Jan 17 00:15:04.456 INFO Fetch successful Jan 17 00:15:04.456348 coreos-metadata[1421]: Jan 17 00:15:04.456 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 17 00:15:04.456094 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 00:15:04.458050 coreos-metadata[1421]: Jan 17 00:15:04.456 INFO Fetch successful Jan 17 00:15:04.474017 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:15:04.483530 extend-filesystems[1426]: Found loop4 Jan 17 00:15:04.489111 extend-filesystems[1426]: Found loop5 Jan 17 00:15:04.489111 extend-filesystems[1426]: Found loop6 Jan 17 00:15:04.489111 extend-filesystems[1426]: Found loop7 Jan 17 00:15:04.489111 extend-filesystems[1426]: Found sda Jan 17 00:15:04.489111 extend-filesystems[1426]: Found sda1 Jan 17 00:15:04.489111 extend-filesystems[1426]: Found sda2 Jan 17 00:15:04.489111 extend-filesystems[1426]: Found sda3 Jan 17 00:15:04.489111 extend-filesystems[1426]: Found usr Jan 17 00:15:04.489111 extend-filesystems[1426]: Found sda4 Jan 17 00:15:04.489111 extend-filesystems[1426]: Found sda6 Jan 17 00:15:04.489111 extend-filesystems[1426]: Found sda7 Jan 17 00:15:04.489111 extend-filesystems[1426]: Found sda9 Jan 17 00:15:04.489111 extend-filesystems[1426]: Checking size of /dev/sda9 Jan 17 00:15:04.489088 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:15:04.507553 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
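The coreos-metadata fetches above go to the GCE metadata server at 169.254.169.254; such requests only succeed when the client sends the Metadata-Flavor header, so a request of the kind the agent issues looks roughly like this (sketch, not captured from this host):

    GET /computeMetadata/v1/instance/hostname HTTP/1.1
    Host: 169.254.169.254
    Metadata-Flavor: Google

The 404 "resource not found" responses seen further down for the sshKeys and block-project-ssh-keys attributes are ordinary "attribute not set" results, not transport errors.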
Jan 17 00:15:04.515498 extend-filesystems[1426]: Resized partition /dev/sda9 Jan 17 00:15:04.582781 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Jan 17 00:15:04.582854 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1287) Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: ---------------------------------------------------- Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: corporation. Support and training for ntp-4 are Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: available at https://www.nwtime.org/support Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: ---------------------------------------------------- Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: proto: precision = 0.084 usec (-23) Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: basedate set to 2026-01-04 Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: gps base set to 2026-01-04 (week 2400) Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: Listen normally on 3 eth0 10.128.0.91:123 Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: Listen normally on 4 lo [::1]:123 Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:5b%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:5b%2#123 Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:5b%2 Jan 17 00:15:04.582951 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: Listening on routing socket on fd #21 for interface updates Jan 17 00:15:04.549092 dbus-daemon[1422]: [system] SELinux support is enabled Jan 17 00:15:04.584698 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:15:04.607099 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:15:04.607099 ntpd[1428]: 17 Jan 00:15:04 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:15:04.554473 dbus-daemon[1422]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1371 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 00:15:04.607078 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 17 00:15:04.559661 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:15:04.559692 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:15:04.559707 ntpd[1428]: ---------------------------------------------------- Jan 17 00:15:04.559720 ntpd[1428]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:15:04.559733 ntpd[1428]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:15:04.559748 ntpd[1428]: corporation. Support and training for ntp-4 are Jan 17 00:15:04.559762 ntpd[1428]: available at https://www.nwtime.org/support Jan 17 00:15:04.559776 ntpd[1428]: ---------------------------------------------------- Jan 17 00:15:04.570721 ntpd[1428]: proto: precision = 0.084 usec (-23) Jan 17 00:15:04.572088 ntpd[1428]: basedate set to 2026-01-04 Jan 17 00:15:04.572114 ntpd[1428]: gps base set to 2026-01-04 (week 2400) Jan 17 00:15:04.577351 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:15:04.577413 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:15:04.578110 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:15:04.578181 ntpd[1428]: Listen normally on 3 eth0 10.128.0.91:123 Jan 17 00:15:04.578254 ntpd[1428]: Listen normally on 4 lo [::1]:123 Jan 17 00:15:04.578335 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:5b%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:15:04.578366 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:5b%2#123 Jan 17 00:15:04.632923 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Jan 17 00:15:04.624603 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 17 00:15:04.578389 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:5b%2 Jan 17 00:15:04.625348 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:15:04.578434 ntpd[1428]: Listening on routing socket on fd #21 for interface updates Jan 17 00:15:04.582973 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:15:04.583005 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:15:04.635122 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:15:04.637257 extend-filesystems[1443]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 00:15:04.637257 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 00:15:04.637257 extend-filesystems[1443]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Jan 17 00:15:04.698023 extend-filesystems[1426]: Resized filesystem in /dev/sda9 Jan 17 00:15:04.651051 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:15:04.676015 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:15:04.705425 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:15:04.706987 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:15:04.707722 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:15:04.708033 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:15:04.713397 jq[1454]: true Jan 17 00:15:04.718588 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 17 00:15:04.718837 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:15:04.738485 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:15:04.739975 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:15:04.759992 update_engine[1452]: I20260117 00:15:04.759428 1452 main.cc:92] Flatcar Update Engine starting Jan 17 00:15:04.774226 update_engine[1452]: I20260117 00:15:04.774168 1452 update_check_scheduler.cc:74] Next update check in 11m52s Jan 17 00:15:04.782272 jq[1458]: true Jan 17 00:15:04.800465 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:15:04.823260 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:15:04.824819 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:15:04.824860 systemd-logind[1449]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 17 00:15:04.827951 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:15:04.828207 systemd-logind[1449]: New seat seat0. Jan 17 00:15:04.833183 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:15:04.865375 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:15:04.868779 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:15:04.902740 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:15:04.906541 tar[1457]: linux-amd64/LICENSE Jan 17 00:15:04.906541 tar[1457]: linux-amd64/helm Jan 17 00:15:04.920667 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:15:04.927365 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:15:04.933245 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:15:04.933522 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:15:04.933759 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:15:04.954448 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 00:15:04.962158 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:15:04.962566 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:15:04.983282 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:15:05.001261 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:15:05.023252 systemd[1]: Starting sshkeys.service... Jan 17 00:15:05.055506 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:15:05.074785 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:15:05.096263 systemd[1]: Started sshd@0-10.128.0.91:22-4.153.228.146:40562.service - OpenSSH per-connection server daemon (4.153.228.146:40562). 
Jan 17 00:15:05.128126 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:15:05.149853 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:15:05.197431 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:15:05.199310 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:15:05.221162 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:15:05.315482 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:15:05.339503 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:15:05.340508 coreos-metadata[1504]: Jan 17 00:15:05.340 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 17 00:15:05.346409 coreos-metadata[1504]: Jan 17 00:15:05.344 INFO Fetch failed with 404: resource not found Jan 17 00:15:05.346409 coreos-metadata[1504]: Jan 17 00:15:05.344 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 17 00:15:05.346409 coreos-metadata[1504]: Jan 17 00:15:05.345 INFO Fetch successful Jan 17 00:15:05.346409 coreos-metadata[1504]: Jan 17 00:15:05.346 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 17 00:15:05.349036 coreos-metadata[1504]: Jan 17 00:15:05.346 INFO Fetch failed with 404: resource not found Jan 17 00:15:05.349036 coreos-metadata[1504]: Jan 17 00:15:05.346 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 17 00:15:05.350197 coreos-metadata[1504]: Jan 17 00:15:05.349 INFO Fetch failed with 404: resource not found Jan 17 00:15:05.350197 coreos-metadata[1504]: Jan 17 00:15:05.350 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 17 00:15:05.353122 coreos-metadata[1504]: Jan 17 00:15:05.352 INFO Fetch successful Jan 17 00:15:05.359459 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:15:05.359659 unknown[1504]: wrote ssh authorized keys file for user: core Jan 17 00:15:05.366497 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 00:15:05.368369 dbus-daemon[1422]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1495 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 00:15:05.371156 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:15:05.392518 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 00:15:05.417372 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 00:15:05.448258 update-ssh-keys[1519]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:15:05.456286 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:15:05.482351 systemd[1]: Finished sshkeys.service. 
Jan 17 00:15:05.529565 polkitd[1521]: Started polkitd version 121 Jan 17 00:15:05.558006 polkitd[1521]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 00:15:05.558101 polkitd[1521]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 00:15:05.561401 ntpd[1428]: bind(24) AF_INET6 fe80::4001:aff:fe80:5b%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:15:05.561967 ntpd[1428]: 17 Jan 00:15:05 ntpd[1428]: bind(24) AF_INET6 fe80::4001:aff:fe80:5b%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:15:05.561967 ntpd[1428]: 17 Jan 00:15:05 ntpd[1428]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:5b%2#123 Jan 17 00:15:05.561967 ntpd[1428]: 17 Jan 00:15:05 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:5b%2 Jan 17 00:15:05.561447 ntpd[1428]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:5b%2#123 Jan 17 00:15:05.561470 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:5b%2 Jan 17 00:15:05.565536 polkitd[1521]: Finished loading, compiling and executing 2 rules Jan 17 00:15:05.568413 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 00:15:05.568783 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 00:15:05.569917 polkitd[1521]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 00:15:05.619478 systemd-hostnamed[1495]: Hostname set to (transient) Jan 17 00:15:05.622970 systemd-resolved[1372]: System hostname changed to 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3'. Jan 17 00:15:05.639937 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:15:05.672912 containerd[1459]: time="2026-01-17T00:15:05.672174043Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:15:05.675769 sshd[1503]: Accepted publickey for core from 4.153.228.146 port 40562 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:15:05.676737 sshd[1503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:05.700539 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:15:05.720286 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:15:05.738636 systemd-logind[1449]: New session 1 of user core. Jan 17 00:15:05.764876 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:15:05.777187 containerd[1459]: time="2026-01-17T00:15:05.777138560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:05.781807 containerd[1459]: time="2026-01-17T00:15:05.781753464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:05.782446 containerd[1459]: time="2026-01-17T00:15:05.781943884Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:15:05.782446 containerd[1459]: time="2026-01-17T00:15:05.781981091Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 17 00:15:05.782446 containerd[1459]: time="2026-01-17T00:15:05.782202898Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:15:05.782446 containerd[1459]: time="2026-01-17T00:15:05.782233626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:05.782446 containerd[1459]: time="2026-01-17T00:15:05.782322256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:05.782446 containerd[1459]: time="2026-01-17T00:15:05.782343180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:05.783016 containerd[1459]: time="2026-01-17T00:15:05.782982001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:05.785650 containerd[1459]: time="2026-01-17T00:15:05.784956230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:05.785650 containerd[1459]: time="2026-01-17T00:15:05.784996147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:05.785650 containerd[1459]: time="2026-01-17T00:15:05.785019202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:05.785650 containerd[1459]: time="2026-01-17T00:15:05.785151073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:05.785650 containerd[1459]: time="2026-01-17T00:15:05.785594574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:05.785332 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:15:05.786681 containerd[1459]: time="2026-01-17T00:15:05.786205009Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:05.786681 containerd[1459]: time="2026-01-17T00:15:05.786240433Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:15:05.786681 containerd[1459]: time="2026-01-17T00:15:05.786375413Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:15:05.786681 containerd[1459]: time="2026-01-17T00:15:05.786449253Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:15:05.791252 containerd[1459]: time="2026-01-17T00:15:05.791221583Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.792963002Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.793046213Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.793077063Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.793103381Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.793273165Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.793802016Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.793985978Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.794016178Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.794037785Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.794060518Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.794085502Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.794107142Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.794131717Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:15:05.794749 containerd[1459]: time="2026-01-17T00:15:05.794156814Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794179303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794201330Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794223078Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794257378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794282628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794329010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794354548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794376649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794398512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794429682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794449780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794470208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794495472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.795407 containerd[1459]: time="2026-01-17T00:15:05.794514277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.796050 containerd[1459]: time="2026-01-17T00:15:05.794535597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.796050 containerd[1459]: time="2026-01-17T00:15:05.794557887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.796050 containerd[1459]: time="2026-01-17T00:15:05.794593894Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:15:05.796050 containerd[1459]: time="2026-01-17T00:15:05.794632471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.796050 containerd[1459]: time="2026-01-17T00:15:05.794655576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.796050 containerd[1459]: time="2026-01-17T00:15:05.794674284Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:15:05.799516 containerd[1459]: time="2026-01-17T00:15:05.797970281Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:15:05.799516 containerd[1459]: time="2026-01-17T00:15:05.798094291Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:15:05.799516 containerd[1459]: time="2026-01-17T00:15:05.798118363Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:15:05.799516 containerd[1459]: time="2026-01-17T00:15:05.798141864Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:15:05.799516 containerd[1459]: time="2026-01-17T00:15:05.798159760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 17 00:15:05.799516 containerd[1459]: time="2026-01-17T00:15:05.798182891Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:15:05.799516 containerd[1459]: time="2026-01-17T00:15:05.798207127Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:15:05.799516 containerd[1459]: time="2026-01-17T00:15:05.798226127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:15:05.799944 containerd[1459]: time="2026-01-17T00:15:05.798688837Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:15:05.799944 containerd[1459]: time="2026-01-17T00:15:05.798786159Z" level=info msg="Connect containerd service" Jan 17 00:15:05.799944 containerd[1459]: time="2026-01-17T00:15:05.798842765Z" level=info msg="using legacy CRI server" Jan 17 00:15:05.799944 containerd[1459]: time="2026-01-17T00:15:05.798856142Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:15:05.799944 containerd[1459]: 
time="2026-01-17T00:15:05.799027415Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:15:05.802688 containerd[1459]: time="2026-01-17T00:15:05.802377491Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:15:05.805012 containerd[1459]: time="2026-01-17T00:15:05.803619862Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:15:05.805012 containerd[1459]: time="2026-01-17T00:15:05.803712635Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:15:05.805012 containerd[1459]: time="2026-01-17T00:15:05.804331764Z" level=info msg="Start subscribing containerd event" Jan 17 00:15:05.805012 containerd[1459]: time="2026-01-17T00:15:05.804406566Z" level=info msg="Start recovering state" Jan 17 00:15:05.805012 containerd[1459]: time="2026-01-17T00:15:05.804580179Z" level=info msg="Start event monitor" Jan 17 00:15:05.805012 containerd[1459]: time="2026-01-17T00:15:05.804606596Z" level=info msg="Start snapshots syncer" Jan 17 00:15:05.805012 containerd[1459]: time="2026-01-17T00:15:05.804622643Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:15:05.805012 containerd[1459]: time="2026-01-17T00:15:05.804644237Z" level=info msg="Start streaming server" Jan 17 00:15:05.805012 containerd[1459]: time="2026-01-17T00:15:05.804735137Z" level=info msg="containerd successfully booted in 0.135741s" Jan 17 00:15:05.804834 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:15:05.823087 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:15:05.849093 systemd-networkd[1371]: eth0: Gained IPv6LL Jan 17 00:15:05.854677 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:15:05.866714 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:15:05.888693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:05.906230 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:15:05.926232 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 17 00:15:05.960088 init.sh[1552]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 17 00:15:05.961727 init.sh[1552]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 17 00:15:05.961727 init.sh[1552]: + /usr/bin/google_instance_setup Jan 17 00:15:05.966956 tar[1457]: linux-amd64/README.md Jan 17 00:15:05.996407 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:15:06.025674 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:15:06.092090 systemd[1542]: Queued start job for default target default.target. Jan 17 00:15:06.097123 systemd[1542]: Created slice app.slice - User Application Slice. Jan 17 00:15:06.097166 systemd[1542]: Reached target paths.target - Paths. Jan 17 00:15:06.097191 systemd[1542]: Reached target timers.target - Timers. Jan 17 00:15:06.100938 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:15:06.125582 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:15:06.125759 systemd[1542]: Reached target sockets.target - Sockets. 
Jan 17 00:15:06.125785 systemd[1542]: Reached target basic.target - Basic System. Jan 17 00:15:06.125852 systemd[1542]: Reached target default.target - Main User Target. Jan 17 00:15:06.125936 systemd[1542]: Startup finished in 288ms. Jan 17 00:15:06.126112 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:15:06.141142 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:15:06.354915 systemd[1]: Started sshd@1-10.128.0.91:22-4.153.228.146:40572.service - OpenSSH per-connection server daemon (4.153.228.146:40572). Jan 17 00:15:06.602995 sshd[1570]: Accepted publickey for core from 4.153.228.146 port 40572 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:15:06.604183 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:06.615585 systemd-logind[1449]: New session 2 of user core. Jan 17 00:15:06.627927 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:15:06.639937 instance-setup[1556]: INFO Running google_set_multiqueue. Jan 17 00:15:06.660226 instance-setup[1556]: INFO Set channels for eth0 to 2. Jan 17 00:15:06.665683 instance-setup[1556]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 17 00:15:06.667516 instance-setup[1556]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 17 00:15:06.668035 instance-setup[1556]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 17 00:15:06.669880 instance-setup[1556]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 17 00:15:06.670430 instance-setup[1556]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 17 00:15:06.672304 instance-setup[1556]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 17 00:15:06.672700 instance-setup[1556]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 17 00:15:06.674601 instance-setup[1556]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 17 00:15:06.684752 instance-setup[1556]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 00:15:06.689089 instance-setup[1556]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 00:15:06.691131 instance-setup[1556]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 17 00:15:06.691183 instance-setup[1556]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 17 00:15:06.708337 init.sh[1552]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 17 00:15:06.803199 sshd[1570]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:06.812240 systemd[1]: sshd@1-10.128.0.91:22-4.153.228.146:40572.service: Deactivated successfully. Jan 17 00:15:06.817858 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:15:06.820304 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:15:06.825867 systemd-logind[1449]: Removed session 2. Jan 17 00:15:06.851242 systemd[1]: Started sshd@2-10.128.0.91:22-4.153.228.146:40578.service - OpenSSH per-connection server daemon (4.153.228.146:40578). Jan 17 00:15:06.890503 startup-script[1603]: INFO Starting startup scripts. Jan 17 00:15:06.895996 startup-script[1603]: INFO No startup scripts found in metadata. Jan 17 00:15:06.896072 startup-script[1603]: INFO Finished running startup scripts. 
Jan 17 00:15:06.915142 init.sh[1552]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 17 00:15:06.915142 init.sh[1552]: + daemon_pids=() Jan 17 00:15:06.915299 init.sh[1552]: + for d in accounts clock_skew network Jan 17 00:15:06.915778 init.sh[1552]: + daemon_pids+=($!) Jan 17 00:15:06.915778 init.sh[1552]: + for d in accounts clock_skew network Jan 17 00:15:06.915922 init.sh[1612]: + /usr/bin/google_accounts_daemon Jan 17 00:15:06.917673 init.sh[1552]: + daemon_pids+=($!) Jan 17 00:15:06.917673 init.sh[1552]: + for d in accounts clock_skew network Jan 17 00:15:06.917673 init.sh[1552]: + daemon_pids+=($!) Jan 17 00:15:06.917673 init.sh[1552]: + NOTIFY_SOCKET=/run/systemd/notify Jan 17 00:15:06.917673 init.sh[1552]: + /usr/bin/systemd-notify --ready Jan 17 00:15:06.919195 init.sh[1613]: + /usr/bin/google_clock_skew_daemon Jan 17 00:15:06.920977 init.sh[1614]: + /usr/bin/google_network_daemon Jan 17 00:15:06.944497 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 17 00:15:06.957595 init.sh[1552]: + wait -n 1612 1613 1614 Jan 17 00:15:07.113028 sshd[1610]: Accepted publickey for core from 4.153.228.146 port 40578 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:15:07.113464 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:07.127846 systemd-logind[1449]: New session 3 of user core. Jan 17 00:15:07.131105 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:15:07.303530 sshd[1610]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:07.315219 google-networking[1614]: INFO Starting Google Networking daemon. Jan 17 00:15:07.316616 systemd[1]: sshd@2-10.128.0.91:22-4.153.228.146:40578.service: Deactivated successfully. Jan 17 00:15:07.319302 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:15:07.326266 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:15:07.332127 systemd-logind[1449]: Removed session 3. Jan 17 00:15:07.363398 google-clock-skew[1613]: INFO Starting Google Clock Skew daemon. Jan 17 00:15:07.376662 google-clock-skew[1613]: INFO Clock drift token has changed: 0. Jan 17 00:15:07.386819 groupadd[1627]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 17 00:15:07.391087 groupadd[1627]: group added to /etc/gshadow: name=google-sudoers Jan 17 00:15:07.443379 groupadd[1627]: new group: name=google-sudoers, GID=1000 Jan 17 00:15:07.473712 google-accounts[1612]: INFO Starting Google Accounts daemon. Jan 17 00:15:07.486920 google-accounts[1612]: WARNING OS Login not installed. Jan 17 00:15:07.488096 google-accounts[1612]: INFO Creating a new user account for 0. Jan 17 00:15:07.492809 init.sh[1636]: useradd: invalid user name '0': use --badname to ignore Jan 17 00:15:07.492384 google-accounts[1612]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 17 00:15:07.814276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:07.825723 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:15:07.830494 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:07.836566 systemd[1]: Startup finished in 991ms (kernel) + 9.032s (initrd) + 9.107s (userspace) = 19.131s. Jan 17 00:15:08.000810 systemd-resolved[1372]: Clock change detected. Flushing caches. 
Jan 17 00:15:08.001582 google-clock-skew[1613]: INFO Synced system time with hardware clock. Jan 17 00:15:08.572164 kubelet[1643]: E0117 00:15:08.572033 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:08.575125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:08.575375 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:15:08.575965 systemd[1]: kubelet.service: Consumed 1.139s CPU time. Jan 17 00:15:08.601366 ntpd[1428]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:5b%2]:123 Jan 17 00:15:08.601711 ntpd[1428]: 17 Jan 00:15:08 ntpd[1428]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:5b%2]:123 Jan 17 00:15:17.386377 systemd[1]: Started sshd@3-10.128.0.91:22-4.153.228.146:59248.service - OpenSSH per-connection server daemon (4.153.228.146:59248). Jan 17 00:15:17.606516 sshd[1655]: Accepted publickey for core from 4.153.228.146 port 59248 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:15:17.608386 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:17.613547 systemd-logind[1449]: New session 4 of user core. Jan 17 00:15:17.617253 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:15:17.774153 sshd[1655]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:17.779551 systemd[1]: sshd@3-10.128.0.91:22-4.153.228.146:59248.service: Deactivated successfully. Jan 17 00:15:17.781915 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:15:17.782967 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:15:17.784503 systemd-logind[1449]: Removed session 4. Jan 17 00:15:17.829416 systemd[1]: Started sshd@4-10.128.0.91:22-4.153.228.146:59262.service - OpenSSH per-connection server daemon (4.153.228.146:59262). Jan 17 00:15:18.064293 sshd[1662]: Accepted publickey for core from 4.153.228.146 port 59262 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:15:18.066171 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:18.072552 systemd-logind[1449]: New session 5 of user core. Jan 17 00:15:18.082311 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:15:18.243395 sshd[1662]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:18.247868 systemd[1]: sshd@4-10.128.0.91:22-4.153.228.146:59262.service: Deactivated successfully. Jan 17 00:15:18.250318 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:15:18.252130 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:15:18.253505 systemd-logind[1449]: Removed session 5. Jan 17 00:15:18.292408 systemd[1]: Started sshd@5-10.128.0.91:22-4.153.228.146:59272.service - OpenSSH per-connection server daemon (4.153.228.146:59272). Jan 17 00:15:18.509269 sshd[1669]: Accepted publickey for core from 4.153.228.146 port 59272 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:15:18.511170 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:18.517433 systemd-logind[1449]: New session 6 of user core. 
Jan 17 00:15:18.528237 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:15:18.650727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:15:18.656319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:18.682339 sshd[1669]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:18.687519 systemd[1]: sshd@5-10.128.0.91:22-4.153.228.146:59272.service: Deactivated successfully. Jan 17 00:15:18.689686 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:15:18.690762 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:15:18.692790 systemd-logind[1449]: Removed session 6. Jan 17 00:15:18.723184 systemd[1]: Started sshd@6-10.128.0.91:22-4.153.228.146:59280.service - OpenSSH per-connection server daemon (4.153.228.146:59280). Jan 17 00:15:18.947925 sshd[1679]: Accepted publickey for core from 4.153.228.146 port 59280 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:15:18.949786 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:18.956035 systemd-logind[1449]: New session 7 of user core. Jan 17 00:15:18.961269 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:15:18.996245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:19.004538 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:19.055533 kubelet[1687]: E0117 00:15:19.055471 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:19.059759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:19.060013 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:15:19.111945 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:15:19.112532 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:19.125709 sudo[1694]: pam_unix(sudo:session): session closed for user root Jan 17 00:15:19.157125 sshd[1679]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:19.161523 systemd[1]: sshd@6-10.128.0.91:22-4.153.228.146:59280.service: Deactivated successfully. Jan 17 00:15:19.163875 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:15:19.165640 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:15:19.167134 systemd-logind[1449]: Removed session 7. Jan 17 00:15:19.204416 systemd[1]: Started sshd@7-10.128.0.91:22-4.153.228.146:59286.service - OpenSSH per-connection server daemon (4.153.228.146:59286). Jan 17 00:15:19.424118 sshd[1699]: Accepted publickey for core from 4.153.228.146 port 59286 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:15:19.425928 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:19.432109 systemd-logind[1449]: New session 8 of user core. Jan 17 00:15:19.444232 systemd[1]: Started session-8.scope - Session 8 of User core. 
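Both kubelet exits above fail for the same reason: /var/lib/kubelet/config.yaml does not exist yet, which is the normal state of a freshly provisioned node before a bootstrap tool such as kubeadm writes that file. A minimal sketch of the kind of file the kubelet expects there (illustrative assumption, not taken from this host):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd

Until the file is written, systemd keeps rescheduling the unit ("Scheduled restart job, restart counter is at 1" above), so the identical error repeats on each attempt.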
Jan 17 00:15:19.568353 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:15:19.568854 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:19.573675 sudo[1703]: pam_unix(sudo:session): session closed for user root Jan 17 00:15:19.586593 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:15:19.587112 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:19.605456 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:15:19.607693 auditctl[1706]: No rules Jan 17 00:15:19.608226 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:15:19.608483 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:15:19.615603 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:15:19.647003 augenrules[1724]: No rules Jan 17 00:15:19.647835 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:15:19.650165 sudo[1702]: pam_unix(sudo:session): session closed for user root Jan 17 00:15:19.681921 sshd[1699]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:19.686202 systemd[1]: sshd@7-10.128.0.91:22-4.153.228.146:59286.service: Deactivated successfully. Jan 17 00:15:19.688447 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:15:19.690125 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:15:19.691612 systemd-logind[1449]: Removed session 8. Jan 17 00:15:19.732695 systemd[1]: Started sshd@8-10.128.0.91:22-4.153.228.146:59302.service - OpenSSH per-connection server daemon (4.153.228.146:59302). Jan 17 00:15:19.946495 sshd[1732]: Accepted publickey for core from 4.153.228.146 port 59302 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:15:19.948638 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:19.954127 systemd-logind[1449]: New session 9 of user core. Jan 17 00:15:19.961241 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:15:20.092552 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:15:20.093091 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:20.530446 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:15:20.533257 (dockerd)[1750]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:15:20.956978 dockerd[1750]: time="2026-01-17T00:15:20.956897267Z" level=info msg="Starting up" Jan 17 00:15:21.069563 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1000152757-merged.mount: Deactivated successfully. Jan 17 00:15:21.101251 dockerd[1750]: time="2026-01-17T00:15:21.101207476Z" level=info msg="Loading containers: start." Jan 17 00:15:21.238075 kernel: Initializing XFRM netlink socket Jan 17 00:15:21.341153 systemd-networkd[1371]: docker0: Link UP Jan 17 00:15:21.362810 dockerd[1750]: time="2026-01-17T00:15:21.362751936Z" level=info msg="Loading containers: done." 
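The audit-rules sequence above removes the two rule files from /etc/audit/rules.d and restarts audit-rules, after which both auditctl and augenrules report "No rules". A small sketch of the same check, assuming the standard /etc/audit/rules.d/*.rules layout shown in the removed paths; the helper itself is illustrative.

```go
// Illustrative check mirroring the audit-rules reload above: once the rule
// files are gone from /etc/audit/rules.d, there is nothing left to load.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	rules, err := filepath.Glob("/etc/audit/rules.d/*.rules")
	if err != nil || len(rules) == 0 {
		fmt.Println("No rules")
		return
	}
	fmt.Println("rule files still present:", rules)
}
```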
Jan 17 00:15:21.380124 dockerd[1750]: time="2026-01-17T00:15:21.380037911Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:15:21.380337 dockerd[1750]: time="2026-01-17T00:15:21.380262240Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:15:21.380499 dockerd[1750]: time="2026-01-17T00:15:21.380442909Z" level=info msg="Daemon has completed initialization" Jan 17 00:15:21.384166 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck895895488-merged.mount: Deactivated successfully. Jan 17 00:15:21.417154 dockerd[1750]: time="2026-01-17T00:15:21.417002540Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:15:21.417742 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:15:22.408623 containerd[1459]: time="2026-01-17T00:15:22.408576458Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 17 00:15:22.937758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1616667384.mount: Deactivated successfully. Jan 17 00:15:24.559903 containerd[1459]: time="2026-01-17T00:15:24.559837061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:24.561512 containerd[1459]: time="2026-01-17T00:15:24.561449217Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27076160" Jan 17 00:15:24.564065 containerd[1459]: time="2026-01-17T00:15:24.562624947Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:24.566033 containerd[1459]: time="2026-01-17T00:15:24.565992444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:24.567615 containerd[1459]: time="2026-01-17T00:15:24.567572449Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.158944316s" Jan 17 00:15:24.567771 containerd[1459]: time="2026-01-17T00:15:24.567743614Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 17 00:15:24.568623 containerd[1459]: time="2026-01-17T00:15:24.568594945Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 17 00:15:26.119279 containerd[1459]: time="2026-01-17T00:15:26.119214293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:26.120900 containerd[1459]: time="2026-01-17T00:15:26.120834816Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21164498" Jan 17 00:15:26.122345 containerd[1459]: 
time="2026-01-17T00:15:26.121759046Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:26.125268 containerd[1459]: time="2026-01-17T00:15:26.125227705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:26.126822 containerd[1459]: time="2026-01-17T00:15:26.126778746Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.557895829s" Jan 17 00:15:26.126984 containerd[1459]: time="2026-01-17T00:15:26.126957683Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 17 00:15:26.128247 containerd[1459]: time="2026-01-17T00:15:26.128213890Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 17 00:15:27.260370 containerd[1459]: time="2026-01-17T00:15:27.260307624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:27.262062 containerd[1459]: time="2026-01-17T00:15:27.261971961Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15727967" Jan 17 00:15:27.263401 containerd[1459]: time="2026-01-17T00:15:27.263339652Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:27.266902 containerd[1459]: time="2026-01-17T00:15:27.266860550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:27.269181 containerd[1459]: time="2026-01-17T00:15:27.268447771Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.14018572s" Jan 17 00:15:27.269181 containerd[1459]: time="2026-01-17T00:15:27.268509748Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 17 00:15:27.269602 containerd[1459]: time="2026-01-17T00:15:27.269571700Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 17 00:15:28.369135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809928759.mount: Deactivated successfully. 
Jan 17 00:15:28.862442 containerd[1459]: time="2026-01-17T00:15:28.862370199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:28.863927 containerd[1459]: time="2026-01-17T00:15:28.863686115Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25967316" Jan 17 00:15:28.866370 containerd[1459]: time="2026-01-17T00:15:28.864896483Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:28.868542 containerd[1459]: time="2026-01-17T00:15:28.867501229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:28.868542 containerd[1459]: time="2026-01-17T00:15:28.868366977Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.598644946s" Jan 17 00:15:28.868542 containerd[1459]: time="2026-01-17T00:15:28.868418769Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 17 00:15:28.869448 containerd[1459]: time="2026-01-17T00:15:28.869418609Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 17 00:15:29.285646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:15:29.292316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:29.319121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2510615451.mount: Deactivated successfully. Jan 17 00:15:29.655445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:29.664561 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:29.764999 kubelet[1978]: E0117 00:15:29.764942 1978 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:29.770139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:29.770407 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
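systemd keeps rescheduling kubelet.service (restart counter 1 at 00:15:18, counter 2 at 00:15:29), and each attempt exits with status 1 for the same missing-config reason. The spacing in the log is roughly ten seconds between a failure and the next scheduled start; the sketch below reproduces that supervise-with-fixed-delay loop. The ten-second RestartSec is an assumption (the unit's drop-in is not shown in this excerpt), and the delay is scaled down so the example finishes quickly.

```go
// Illustrative supervisor loop: restart a failing start function on a fixed
// delay and keep a restart counter, the way systemd reschedules
// kubelet.service above.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoConfig = errors.New("open /var/lib/kubelet/config.yaml: no such file or directory")

// startKubelet stands in for the real service start; in this log it keeps
// failing for the same missing-config reason.
func startKubelet() error { return errNoConfig }

func main() {
	// ~10 s in the log, scaled down here so the sketch runs in milliseconds.
	restartSec := 10 * time.Millisecond
	for counter := 1; counter <= 2; counter++ {
		if err := startKubelet(); err != nil {
			fmt.Printf("kubelet.service: main process exited (%v); restart counter is at %d\n", err, counter)
			time.Sleep(restartSec)
			continue
		}
		break
	}
}
```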
Jan 17 00:15:30.914149 containerd[1459]: time="2026-01-17T00:15:30.914092641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:30.915861 containerd[1459]: time="2026-01-17T00:15:30.915796712Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22395089" Jan 17 00:15:30.916773 containerd[1459]: time="2026-01-17T00:15:30.916706842Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:30.920613 containerd[1459]: time="2026-01-17T00:15:30.920577541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:30.922488 containerd[1459]: time="2026-01-17T00:15:30.922232577Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.052773809s" Jan 17 00:15:30.922488 containerd[1459]: time="2026-01-17T00:15:30.922279436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 17 00:15:30.923255 containerd[1459]: time="2026-01-17T00:15:30.923189792Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 17 00:15:31.269982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4263442778.mount: Deactivated successfully. 
Jan 17 00:15:31.276920 containerd[1459]: time="2026-01-17T00:15:31.276862869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:31.278114 containerd[1459]: time="2026-01-17T00:15:31.278037913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=322216" Jan 17 00:15:31.280070 containerd[1459]: time="2026-01-17T00:15:31.278866532Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:31.281666 containerd[1459]: time="2026-01-17T00:15:31.281608076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:31.282880 containerd[1459]: time="2026-01-17T00:15:31.282686411Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 359.440217ms" Jan 17 00:15:31.282880 containerd[1459]: time="2026-01-17T00:15:31.282729827Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 17 00:15:31.283879 containerd[1459]: time="2026-01-17T00:15:31.283839120Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 17 00:15:31.723280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379687530.mount: Deactivated successfully. Jan 17 00:15:34.536584 containerd[1459]: time="2026-01-17T00:15:34.536511379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:34.538446 containerd[1459]: time="2026-01-17T00:15:34.538378470Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74172832" Jan 17 00:15:34.539985 containerd[1459]: time="2026-01-17T00:15:34.539389250Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:34.543187 containerd[1459]: time="2026-01-17T00:15:34.543144247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:34.544906 containerd[1459]: time="2026-01-17T00:15:34.544866809Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.260991408s" Jan 17 00:15:34.545085 containerd[1459]: time="2026-01-17T00:15:34.545027510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 17 00:15:35.696479 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
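The containerd pull records above report both "bytes read" and the wall-clock pull duration, so the effective transfer rate can be read straight off the log: the large etcd image moves at tens of MiB/s, while the tiny pause image comes out far slower per byte, presumably because its pull time is mostly registry round-trip overhead rather than data transfer. The figures below are copied from the log; the arithmetic helper itself is illustrative.

```go
// Effective pull throughput from the figures containerd logged above.
package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" from the log
		seconds float64 // reported pull duration from the log
	}{
		{"registry.k8s.io/kube-apiserver:v1.34.3", 27076160, 2.158944316},
		{"registry.k8s.io/pause:3.10.1", 322216, 0.359440217},
		{"registry.k8s.io/etcd:3.6.4-0", 74172832, 3.260991408},
	}
	for _, p := range pulls {
		fmt.Printf("%-40s %8.2f MiB/s\n", p.image, p.bytes/p.seconds/(1<<20))
	}
}
```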
Jan 17 00:15:38.918079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:38.928949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:38.969302 systemd[1]: Reloading requested from client PID 2118 ('systemctl') (unit session-9.scope)... Jan 17 00:15:38.969455 systemd[1]: Reloading... Jan 17 00:15:39.140081 zram_generator::config[2161]: No configuration found. Jan 17 00:15:39.293756 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:39.395862 systemd[1]: Reloading finished in 425 ms. Jan 17 00:15:39.452389 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:15:39.452531 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:15:39.452975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:39.457606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:39.817940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:39.831539 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:15:39.890865 kubelet[2208]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:15:39.890865 kubelet[2208]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:15:39.892758 kubelet[2208]: I0117 00:15:39.892687 2208 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:15:40.459581 kubelet[2208]: I0117 00:15:40.459523 2208 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:15:40.459581 kubelet[2208]: I0117 00:15:40.459557 2208 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:15:40.461422 kubelet[2208]: I0117 00:15:40.461384 2208 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:15:40.461422 kubelet[2208]: I0117 00:15:40.461416 2208 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
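During the reload above, systemd warns that docker.socket's ListenStream= points below the legacy /var/run/ directory and rewrites it to /run/docker.sock. On current systemd-based images /var/run is typically just a symlink to /run, so both paths name the same socket; a quick illustrative check of that assumption:

```go
// Illustrative check of the legacy-path warning above: if /var/run is a
// symlink to /run, /var/run/docker.sock and /run/docker.sock are the same
// socket and the unit file only needs its path updated.
package main

import (
	"fmt"
	"os"
)

func main() {
	target, err := os.Readlink("/var/run")
	if err != nil {
		fmt.Println("/var/run is not a symlink here:", err)
		return
	}
	fmt.Printf("/var/run -> %s, so /var/run/docker.sock and /run/docker.sock resolve to the same socket\n", target)
}
```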
Jan 17 00:15:40.461772 kubelet[2208]: I0117 00:15:40.461738 2208 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:15:40.468570 kubelet[2208]: E0117 00:15:40.468491 2208 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:15:40.469701 kubelet[2208]: I0117 00:15:40.469091 2208 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:15:40.474634 kubelet[2208]: E0117 00:15:40.474588 2208 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:15:40.474777 kubelet[2208]: I0117 00:15:40.474675 2208 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:15:40.482888 kubelet[2208]: I0117 00:15:40.482585 2208 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 17 00:15:40.482992 kubelet[2208]: I0117 00:15:40.482892 2208 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:15:40.483251 kubelet[2208]: I0117 00:15:40.482922 2208 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:15:40.483251 kubelet[2208]: I0117 00:15:40.483262 2208 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:15:40.483491 kubelet[2208]: I0117 00:15:40.483278 2208 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:15:40.483491 kubelet[2208]: I0117 00:15:40.483390 2208 container_manager_linux.go:315] 
"Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:15:40.485981 kubelet[2208]: I0117 00:15:40.485933 2208 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:40.488182 kubelet[2208]: I0117 00:15:40.488142 2208 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:15:40.488182 kubelet[2208]: I0117 00:15:40.488171 2208 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:15:40.490063 kubelet[2208]: I0117 00:15:40.488630 2208 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:15:40.490063 kubelet[2208]: I0117 00:15:40.488666 2208 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:15:40.490063 kubelet[2208]: E0117 00:15:40.488760 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3&limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:15:40.491889 kubelet[2208]: I0117 00:15:40.491860 2208 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:15:40.492756 kubelet[2208]: I0117 00:15:40.492724 2208 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:15:40.492856 kubelet[2208]: I0117 00:15:40.492781 2208 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:15:40.492856 kubelet[2208]: W0117 00:15:40.492845 2208 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 00:15:40.499881 kubelet[2208]: E0117 00:15:40.499834 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:15:40.508902 kubelet[2208]: I0117 00:15:40.508876 2208 server.go:1262] "Started kubelet" Jan 17 00:15:40.511366 kubelet[2208]: I0117 00:15:40.510459 2208 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:15:40.513898 kubelet[2208]: I0117 00:15:40.513853 2208 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:15:40.516329 kubelet[2208]: I0117 00:15:40.516305 2208 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:15:40.519024 kubelet[2208]: E0117 00:15:40.517034 2208 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3.188b5c75fd9b2df1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3,UID:ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3,},FirstTimestamp:2026-01-17 00:15:40.508786161 +0000 UTC m=+0.672097010,LastTimestamp:2026-01-17 00:15:40.508786161 +0000 UTC m=+0.672097010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3,}" Jan 17 00:15:40.522946 kubelet[2208]: I0117 00:15:40.522918 2208 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:15:40.523224 kubelet[2208]: I0117 00:15:40.523202 2208 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:15:40.523692 kubelet[2208]: I0117 00:15:40.523671 2208 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:15:40.524295 kubelet[2208]: I0117 00:15:40.524272 2208 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:15:40.525166 kubelet[2208]: I0117 00:15:40.525146 2208 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:15:40.525430 kubelet[2208]: E0117 00:15:40.525410 2208 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" Jan 17 00:15:40.526037 kubelet[2208]: I0117 00:15:40.526018 2208 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:15:40.527171 kubelet[2208]: I0117 00:15:40.527154 2208 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:15:40.529264 kubelet[2208]: E0117 00:15:40.529234 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:15:40.529468 kubelet[2208]: E0117 00:15:40.529441 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3?timeout=10s\": dial tcp 10.128.0.91:6443: connect: connection refused" interval="200ms" Jan 17 00:15:40.531683 kubelet[2208]: I0117 00:15:40.531094 2208 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:15:40.531969 kubelet[2208]: I0117 00:15:40.531820 2208 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:15:40.533702 kubelet[2208]: E0117 00:15:40.533666 2208 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:15:40.536459 kubelet[2208]: I0117 00:15:40.536432 2208 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:15:40.560465 kubelet[2208]: I0117 00:15:40.560289 2208 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:15:40.561805 kubelet[2208]: I0117 00:15:40.561768 2208 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 17 00:15:40.561805 kubelet[2208]: I0117 00:15:40.561792 2208 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:15:40.561961 kubelet[2208]: I0117 00:15:40.561830 2208 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:15:40.561961 kubelet[2208]: E0117 00:15:40.561892 2208 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:15:40.571878 kubelet[2208]: E0117 00:15:40.571808 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:15:40.573611 kubelet[2208]: I0117 00:15:40.573273 2208 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:15:40.573611 kubelet[2208]: I0117 00:15:40.573296 2208 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:15:40.573611 kubelet[2208]: I0117 00:15:40.573317 2208 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:40.576083 kubelet[2208]: I0117 00:15:40.575732 2208 policy_none.go:49] "None policy: Start" Jan 17 00:15:40.576083 kubelet[2208]: I0117 00:15:40.575769 2208 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:15:40.576083 kubelet[2208]: I0117 00:15:40.575790 2208 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:15:40.578795 kubelet[2208]: I0117 00:15:40.578125 2208 policy_none.go:47] "Start" Jan 17 00:15:40.583926 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:15:40.600841 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
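The certificate bootstrap, the client-go reflectors, and the lease controller above all fail the same way: "dial tcp 10.128.0.91:6443: connect: connection refused", because nothing is listening on 6443 yet. This kubelet is itself about to launch the kube-apiserver from the static pod manifests it just added, so these errors are expected to clear once that pod is running. A quick probe of the endpoint (the address is taken from the log; the probe is illustrative):

```go
// Illustrative probe of the endpoint the reflectors above cannot reach.
// Until the kube-apiserver static pod is up, this dial fails with
// "connect: connection refused".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.128.0.91:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver is accepting connections")
}
```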
Jan 17 00:15:40.605209 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:15:40.614011 kubelet[2208]: E0117 00:15:40.613973 2208 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:15:40.614950 kubelet[2208]: I0117 00:15:40.614247 2208 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:15:40.614950 kubelet[2208]: I0117 00:15:40.614270 2208 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:15:40.615804 kubelet[2208]: I0117 00:15:40.615672 2208 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:15:40.616957 kubelet[2208]: E0117 00:15:40.616806 2208 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:15:40.616957 kubelet[2208]: E0117 00:15:40.616872 2208 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" Jan 17 00:15:40.688355 systemd[1]: Created slice kubepods-burstable-pod27ae9f233eeb85572a48b647c881ba4e.slice - libcontainer container kubepods-burstable-pod27ae9f233eeb85572a48b647c881ba4e.slice. Jan 17 00:15:40.697727 kubelet[2208]: E0117 00:15:40.697678 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.701920 systemd[1]: Created slice kubepods-burstable-pod3a98d89a22d020390fa301fcad6a02fe.slice - libcontainer container kubepods-burstable-pod3a98d89a22d020390fa301fcad6a02fe.slice. Jan 17 00:15:40.704999 kubelet[2208]: E0117 00:15:40.704952 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.708605 systemd[1]: Created slice kubepods-burstable-pod27796e07207e9ee24dff9ffb40bbf1df.slice - libcontainer container kubepods-burstable-pod27796e07207e9ee24dff9ffb40bbf1df.slice. 
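With the systemd cgroup driver from the container manager config above, each pod gets a slice named kubepods-&lt;qos&gt;-pod&lt;uid&gt;.slice, which is exactly what the "Created slice" lines show; the volume-mount records further down associate these three UIDs with the kube-apiserver, kube-controller-manager, and kube-scheduler static pods. The sketch below reconstructs those names; replacing dashes in the UID with underscores is my reading of the systemd cgroup naming convention and is a no-op here, since these static-pod UIDs are dash-free config hashes.

```go
// Illustrative reconstruction of the slice names in the "Created slice"
// lines above: kubepods-<qos>-pod<uid>.slice.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	for _, uid := range []string{
		"27ae9f233eeb85572a48b647c881ba4e", // UIDs copied from the log
		"3a98d89a22d020390fa301fcad6a02fe",
		"27796e07207e9ee24dff9ffb40bbf1df",
	} {
		fmt.Println(podSlice("burstable", uid))
	}
}
```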
Jan 17 00:15:40.710893 kubelet[2208]: E0117 00:15:40.710770 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.719387 kubelet[2208]: I0117 00:15:40.719360 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.719877 kubelet[2208]: E0117 00:15:40.719838 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.91:6443/api/v1/nodes\": dial tcp 10.128.0.91:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.732728 kubelet[2208]: E0117 00:15:40.732687 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3?timeout=10s\": dial tcp 10.128.0.91:6443: connect: connection refused" interval="400ms" Jan 17 00:15:40.829240 kubelet[2208]: I0117 00:15:40.829195 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a98d89a22d020390fa301fcad6a02fe-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"3a98d89a22d020390fa301fcad6a02fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.829240 kubelet[2208]: I0117 00:15:40.829250 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27ae9f233eeb85572a48b647c881ba4e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"27ae9f233eeb85572a48b647c881ba4e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.829473 kubelet[2208]: I0117 00:15:40.829293 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a98d89a22d020390fa301fcad6a02fe-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"3a98d89a22d020390fa301fcad6a02fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.829473 kubelet[2208]: I0117 00:15:40.829322 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27796e07207e9ee24dff9ffb40bbf1df-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"27796e07207e9ee24dff9ffb40bbf1df\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.829473 kubelet[2208]: I0117 00:15:40.829345 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27ae9f233eeb85572a48b647c881ba4e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"27ae9f233eeb85572a48b647c881ba4e\") " 
pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.829473 kubelet[2208]: I0117 00:15:40.829371 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27ae9f233eeb85572a48b647c881ba4e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"27ae9f233eeb85572a48b647c881ba4e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.829671 kubelet[2208]: I0117 00:15:40.829400 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a98d89a22d020390fa301fcad6a02fe-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"3a98d89a22d020390fa301fcad6a02fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.829671 kubelet[2208]: I0117 00:15:40.829443 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a98d89a22d020390fa301fcad6a02fe-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"3a98d89a22d020390fa301fcad6a02fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.829671 kubelet[2208]: I0117 00:15:40.829474 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a98d89a22d020390fa301fcad6a02fe-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"3a98d89a22d020390fa301fcad6a02fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.924771 kubelet[2208]: I0117 00:15:40.924732 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:40.925475 kubelet[2208]: E0117 00:15:40.925193 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.91:6443/api/v1/nodes\": dial tcp 10.128.0.91:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:41.001847 containerd[1459]: time="2026-01-17T00:15:41.001702670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3,Uid:27ae9f233eeb85572a48b647c881ba4e,Namespace:kube-system,Attempt:0,}" Jan 17 00:15:41.008571 containerd[1459]: time="2026-01-17T00:15:41.008222328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3,Uid:3a98d89a22d020390fa301fcad6a02fe,Namespace:kube-system,Attempt:0,}" Jan 17 00:15:41.014882 containerd[1459]: time="2026-01-17T00:15:41.014752429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3,Uid:27796e07207e9ee24dff9ffb40bbf1df,Namespace:kube-system,Attempt:0,}" Jan 17 00:15:41.133462 kubelet[2208]: E0117 00:15:41.133408 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3?timeout=10s\": dial tcp 10.128.0.91:6443: connect: connection refused" interval="800ms" Jan 17 00:15:41.330434 kubelet[2208]: I0117 00:15:41.330379 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:41.330791 kubelet[2208]: E0117 00:15:41.330757 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.91:6443/api/v1/nodes\": dial tcp 10.128.0.91:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:41.374702 kubelet[2208]: E0117 00:15:41.374654 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:15:41.402209 kubelet[2208]: E0117 00:15:41.402164 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:15:41.415404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2333725058.mount: Deactivated successfully. Jan 17 00:15:41.424342 containerd[1459]: time="2026-01-17T00:15:41.424287379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:41.425366 containerd[1459]: time="2026-01-17T00:15:41.425187188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313054" Jan 17 00:15:41.426345 containerd[1459]: time="2026-01-17T00:15:41.426304973Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:41.427492 containerd[1459]: time="2026-01-17T00:15:41.427444432Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:41.428683 containerd[1459]: time="2026-01-17T00:15:41.428617329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:15:41.429604 containerd[1459]: time="2026-01-17T00:15:41.429551690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:15:41.430575 containerd[1459]: time="2026-01-17T00:15:41.430515654Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:41.434855 containerd[1459]: time="2026-01-17T00:15:41.434819367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:41.436247 containerd[1459]: time="2026-01-17T00:15:41.435982129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 434.174396ms" Jan 17 00:15:41.439767 containerd[1459]: time="2026-01-17T00:15:41.439724225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 431.373676ms" Jan 17 00:15:41.446850 containerd[1459]: time="2026-01-17T00:15:41.446792481Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 431.956872ms" Jan 17 00:15:41.617121 containerd[1459]: time="2026-01-17T00:15:41.615866235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:15:41.617121 containerd[1459]: time="2026-01-17T00:15:41.615945662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:15:41.617121 containerd[1459]: time="2026-01-17T00:15:41.615973770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:41.618148 containerd[1459]: time="2026-01-17T00:15:41.617621522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:41.627224 containerd[1459]: time="2026-01-17T00:15:41.626303496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:15:41.627224 containerd[1459]: time="2026-01-17T00:15:41.626365288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:15:41.627224 containerd[1459]: time="2026-01-17T00:15:41.626385017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:41.627224 containerd[1459]: time="2026-01-17T00:15:41.626483990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:41.639695 containerd[1459]: time="2026-01-17T00:15:41.639309096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:15:41.639695 containerd[1459]: time="2026-01-17T00:15:41.639372194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:15:41.639695 containerd[1459]: time="2026-01-17T00:15:41.639415835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:41.639695 containerd[1459]: time="2026-01-17T00:15:41.639547984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:41.673285 systemd[1]: Started cri-containerd-7248ccbc3ae0883cd6ffb21315f48922e8b22f0c09d4264e4dcc363339979244.scope - libcontainer container 7248ccbc3ae0883cd6ffb21315f48922e8b22f0c09d4264e4dcc363339979244. Jan 17 00:15:41.683991 systemd[1]: Started cri-containerd-bf8fa6af188054da923b5f7c079241444bb06c13c75ca0dc72f1181152aaff98.scope - libcontainer container bf8fa6af188054da923b5f7c079241444bb06c13c75ca0dc72f1181152aaff98. Jan 17 00:15:41.688594 systemd[1]: Started cri-containerd-c466aba1ff731fc4128e24f4752d48467cebf38a24b96271762da046f573d128.scope - libcontainer container c466aba1ff731fc4128e24f4752d48467cebf38a24b96271762da046f573d128. Jan 17 00:15:41.768390 containerd[1459]: time="2026-01-17T00:15:41.768335838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3,Uid:27796e07207e9ee24dff9ffb40bbf1df,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf8fa6af188054da923b5f7c079241444bb06c13c75ca0dc72f1181152aaff98\"" Jan 17 00:15:41.772596 kubelet[2208]: E0117 00:15:41.772552 2208 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1" Jan 17 00:15:41.783129 kubelet[2208]: E0117 00:15:41.783077 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:15:41.783793 containerd[1459]: time="2026-01-17T00:15:41.783743376Z" level=info msg="CreateContainer within sandbox \"bf8fa6af188054da923b5f7c079241444bb06c13c75ca0dc72f1181152aaff98\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:15:41.816181 containerd[1459]: time="2026-01-17T00:15:41.814233893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3,Uid:27ae9f233eeb85572a48b647c881ba4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c466aba1ff731fc4128e24f4752d48467cebf38a24b96271762da046f573d128\"" Jan 17 00:15:41.816181 containerd[1459]: time="2026-01-17T00:15:41.816070975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3,Uid:3a98d89a22d020390fa301fcad6a02fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"7248ccbc3ae0883cd6ffb21315f48922e8b22f0c09d4264e4dcc363339979244\"" Jan 17 00:15:41.818872 containerd[1459]: time="2026-01-17T00:15:41.817615548Z" level=info msg="CreateContainer within sandbox \"bf8fa6af188054da923b5f7c079241444bb06c13c75ca0dc72f1181152aaff98\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a3c9c61da62b26c2f0e230262ce8a2bf6aea7d29911f53d4061877084a99ca33\"" Jan 17 00:15:41.818995 kubelet[2208]: E0117 00:15:41.818550 2208 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" 
podName="kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1" Jan 17 00:15:41.819456 kubelet[2208]: E0117 00:15:41.819424 2208 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dc" Jan 17 00:15:41.819581 containerd[1459]: time="2026-01-17T00:15:41.819434199Z" level=info msg="StartContainer for \"a3c9c61da62b26c2f0e230262ce8a2bf6aea7d29911f53d4061877084a99ca33\"" Jan 17 00:15:41.823350 containerd[1459]: time="2026-01-17T00:15:41.823262300Z" level=info msg="CreateContainer within sandbox \"c466aba1ff731fc4128e24f4752d48467cebf38a24b96271762da046f573d128\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:15:41.826912 containerd[1459]: time="2026-01-17T00:15:41.826875718Z" level=info msg="CreateContainer within sandbox \"7248ccbc3ae0883cd6ffb21315f48922e8b22f0c09d4264e4dcc363339979244\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:15:41.857260 containerd[1459]: time="2026-01-17T00:15:41.857195223Z" level=info msg="CreateContainer within sandbox \"c466aba1ff731fc4128e24f4752d48467cebf38a24b96271762da046f573d128\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"275b4a08e37f6e0614d77f62398f3d19a7858713c1cf0391da7aad4862de473a\"" Jan 17 00:15:41.857970 containerd[1459]: time="2026-01-17T00:15:41.857930653Z" level=info msg="StartContainer for \"275b4a08e37f6e0614d77f62398f3d19a7858713c1cf0391da7aad4862de473a\"" Jan 17 00:15:41.864691 containerd[1459]: time="2026-01-17T00:15:41.864310543Z" level=info msg="CreateContainer within sandbox \"7248ccbc3ae0883cd6ffb21315f48922e8b22f0c09d4264e4dcc363339979244\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"511294d0da126a4756e9409faa65c50ba524d4d584fbbb95c256f606c05e9454\"" Jan 17 00:15:41.867098 containerd[1459]: time="2026-01-17T00:15:41.866138965Z" level=info msg="StartContainer for \"511294d0da126a4756e9409faa65c50ba524d4d584fbbb95c256f606c05e9454\"" Jan 17 00:15:41.880883 systemd[1]: Started cri-containerd-a3c9c61da62b26c2f0e230262ce8a2bf6aea7d29911f53d4061877084a99ca33.scope - libcontainer container a3c9c61da62b26c2f0e230262ce8a2bf6aea7d29911f53d4061877084a99ca33. Jan 17 00:15:41.913279 systemd[1]: Started cri-containerd-275b4a08e37f6e0614d77f62398f3d19a7858713c1cf0391da7aad4862de473a.scope - libcontainer container 275b4a08e37f6e0614d77f62398f3d19a7858713c1cf0391da7aad4862de473a. Jan 17 00:15:41.935712 kubelet[2208]: E0117 00:15:41.935648 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3?timeout=10s\": dial tcp 10.128.0.91:6443: connect: connection refused" interval="1.6s" Jan 17 00:15:41.947276 systemd[1]: Started cri-containerd-511294d0da126a4756e9409faa65c50ba524d4d584fbbb95c256f606c05e9454.scope - libcontainer container 511294d0da126a4756e9409faa65c50ba524d4d584fbbb95c256f606c05e9454. 
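The three "Hostname for pod was too long, truncated it" warnings above are the kubelet clamping each pod's hostname (which defaults to the pod name, here 69 to 78 characters because of the long nightly node name) to the 63-character DNS label limit. A sketch of that clamp is below; trimming a trailing '-' or '.' after the cut is my reading of how an invalid label is avoided, and it does not trigger for these names, so the output matches the truncatedHostname values logged above.

```go
// Illustrative version of the hostname clamp behind the warnings above.
package main

import (
	"fmt"
	"strings"
)

const hostnameMaxLen = 63

func truncateHostname(name string) string {
	if len(name) <= hostnameMaxLen {
		return name
	}
	return strings.TrimRight(name[:hostnameMaxLen], "-.")
}

func main() {
	for _, name := range []string{
		"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3",
		"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3",
		"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3",
	} {
		fmt.Printf("%s -> %s\n", name, truncateHostname(name))
	}
}
```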
Jan 17 00:15:41.954709 kubelet[2208]: E0117 00:15:41.954552 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3&limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:15:42.003976 containerd[1459]: time="2026-01-17T00:15:42.003585366Z" level=info msg="StartContainer for \"a3c9c61da62b26c2f0e230262ce8a2bf6aea7d29911f53d4061877084a99ca33\" returns successfully" Jan 17 00:15:42.027920 containerd[1459]: time="2026-01-17T00:15:42.027871867Z" level=info msg="StartContainer for \"275b4a08e37f6e0614d77f62398f3d19a7858713c1cf0391da7aad4862de473a\" returns successfully" Jan 17 00:15:42.087517 containerd[1459]: time="2026-01-17T00:15:42.087366976Z" level=info msg="StartContainer for \"511294d0da126a4756e9409faa65c50ba524d4d584fbbb95c256f606c05e9454\" returns successfully" Jan 17 00:15:42.136861 kubelet[2208]: I0117 00:15:42.136735 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:42.592736 kubelet[2208]: E0117 00:15:42.592691 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:42.593323 kubelet[2208]: E0117 00:15:42.593289 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:42.610331 kubelet[2208]: E0117 00:15:42.610105 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:43.610847 kubelet[2208]: E0117 00:15:43.610804 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:43.611469 kubelet[2208]: E0117 00:15:43.611274 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:44.613522 kubelet[2208]: E0117 00:15:44.613467 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:45.401669 kubelet[2208]: E0117 00:15:45.401622 2208 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:45.405105 kubelet[2208]: I0117 00:15:45.405067 2208 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 
00:15:45.405241 kubelet[2208]: E0117 00:15:45.405111 2208 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\": node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" not found" Jan 17 00:15:45.426546 kubelet[2208]: I0117 00:15:45.426485 2208 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:45.478330 kubelet[2208]: E0117 00:15:45.478282 2208 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:45.478497 kubelet[2208]: I0117 00:15:45.478345 2208 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:45.484195 kubelet[2208]: E0117 00:15:45.484146 2208 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:45.484195 kubelet[2208]: I0117 00:15:45.484193 2208 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:45.489012 kubelet[2208]: E0117 00:15:45.488969 2208 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:45.492227 kubelet[2208]: I0117 00:15:45.492196 2208 apiserver.go:52] "Watching apiserver" Jan 17 00:15:45.527524 kubelet[2208]: I0117 00:15:45.527489 2208 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:15:47.307016 systemd[1]: Reloading requested from client PID 2490 ('systemctl') (unit session-9.scope)... Jan 17 00:15:47.307069 systemd[1]: Reloading... Jan 17 00:15:47.465087 zram_generator::config[2534]: No configuration found. Jan 17 00:15:47.593436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:47.720853 systemd[1]: Reloading finished in 413 ms. Jan 17 00:15:47.774844 kubelet[2208]: I0117 00:15:47.774630 2208 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:15:47.774644 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:47.785516 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:15:47.785832 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:47.785916 systemd[1]: kubelet.service: Consumed 1.169s CPU time, 127.4M memory peak, 0B memory swap peak. Jan 17 00:15:47.790960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
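[editor's note] The docker.socket warning above ("ListenStream= references a path below legacy directory /var/run/ ... please update the unit file accordingly") can be addressed with a systemd drop-in that points the socket at /run/docker.sock. A sketch, assuming a hypothetical drop-in file name; the empty ListenStream= line clears the inherited value before the new path is set:

    # /etc/systemd/system/docker.socket.d/10-socket-path.conf  (hypothetical path)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

Followed by systemctl daemon-reload so the updated unit is picked up on the next reload.
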
Jan 17 00:15:48.148508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:48.162809 (kubelet)[2578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:15:48.233856 kubelet[2578]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:15:48.233856 kubelet[2578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:15:48.233856 kubelet[2578]: I0117 00:15:48.233808 2578 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:15:48.243881 kubelet[2578]: I0117 00:15:48.243828 2578 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:15:48.243881 kubelet[2578]: I0117 00:15:48.243857 2578 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:15:48.243881 kubelet[2578]: I0117 00:15:48.243893 2578 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:15:48.244169 kubelet[2578]: I0117 00:15:48.243909 2578 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:15:48.244237 kubelet[2578]: I0117 00:15:48.244219 2578 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:15:48.245701 kubelet[2578]: I0117 00:15:48.245663 2578 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:15:48.248674 kubelet[2578]: I0117 00:15:48.248008 2578 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:15:48.251906 kubelet[2578]: E0117 00:15:48.251873 2578 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:15:48.252023 kubelet[2578]: I0117 00:15:48.251931 2578 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:15:48.255028 kubelet[2578]: I0117 00:15:48.254985 2578 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:15:48.255372 kubelet[2578]: I0117 00:15:48.255330 2578 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:15:48.255568 kubelet[2578]: I0117 00:15:48.255363 2578 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:15:48.255733 kubelet[2578]: I0117 00:15:48.255569 2578 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:15:48.255733 kubelet[2578]: I0117 00:15:48.255587 2578 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:15:48.255733 kubelet[2578]: I0117 00:15:48.255619 2578 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:15:48.256974 kubelet[2578]: I0117 00:15:48.256952 2578 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:48.257244 kubelet[2578]: I0117 00:15:48.257211 2578 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:15:48.258303 kubelet[2578]: I0117 00:15:48.257254 2578 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:15:48.258303 kubelet[2578]: I0117 00:15:48.257297 2578 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:15:48.258303 kubelet[2578]: I0117 00:15:48.257319 2578 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:15:48.259242 kubelet[2578]: I0117 00:15:48.259217 2578 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:15:48.264985 kubelet[2578]: I0117 00:15:48.264951 2578 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:15:48.265246 kubelet[2578]: I0117 00:15:48.265224 2578 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is 
disabled" Jan 17 00:15:48.309077 kubelet[2578]: I0117 00:15:48.309003 2578 server.go:1262] "Started kubelet" Jan 17 00:15:48.310083 kubelet[2578]: I0117 00:15:48.309985 2578 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:15:48.311882 kubelet[2578]: I0117 00:15:48.311202 2578 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:15:48.311991 kubelet[2578]: I0117 00:15:48.311329 2578 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:15:48.313458 kubelet[2578]: I0117 00:15:48.312293 2578 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:15:48.314378 kubelet[2578]: I0117 00:15:48.314346 2578 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:15:48.318754 kubelet[2578]: I0117 00:15:48.318712 2578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:15:48.320385 kubelet[2578]: I0117 00:15:48.319803 2578 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:15:48.331251 kubelet[2578]: E0117 00:15:48.331220 2578 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:15:48.334063 kubelet[2578]: I0117 00:15:48.334003 2578 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:15:48.334322 kubelet[2578]: I0117 00:15:48.334301 2578 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:15:48.334614 kubelet[2578]: I0117 00:15:48.334596 2578 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:15:48.337151 kubelet[2578]: I0117 00:15:48.335601 2578 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:15:48.338620 kubelet[2578]: I0117 00:15:48.337399 2578 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:15:48.340277 kubelet[2578]: I0117 00:15:48.340254 2578 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:15:48.345110 kubelet[2578]: I0117 00:15:48.344291 2578 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:15:48.346869 kubelet[2578]: I0117 00:15:48.346145 2578 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:15:48.346869 kubelet[2578]: I0117 00:15:48.346176 2578 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:15:48.346869 kubelet[2578]: I0117 00:15:48.346204 2578 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:15:48.346869 kubelet[2578]: E0117 00:15:48.346256 2578 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:15:48.424761 kubelet[2578]: I0117 00:15:48.424374 2578 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:15:48.424761 kubelet[2578]: I0117 00:15:48.424398 2578 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:15:48.424761 kubelet[2578]: I0117 00:15:48.424423 2578 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:48.427172 kubelet[2578]: I0117 00:15:48.427125 2578 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:15:48.427296 kubelet[2578]: I0117 00:15:48.427159 2578 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:15:48.427296 kubelet[2578]: I0117 00:15:48.427195 2578 policy_none.go:49] "None policy: Start" Jan 17 00:15:48.427296 kubelet[2578]: I0117 00:15:48.427209 2578 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:15:48.427296 kubelet[2578]: I0117 00:15:48.427229 2578 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:15:48.427502 kubelet[2578]: I0117 00:15:48.427395 2578 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 17 00:15:48.427502 kubelet[2578]: I0117 00:15:48.427409 2578 policy_none.go:47] "Start" Jan 17 00:15:48.437294 kubelet[2578]: E0117 00:15:48.436368 2578 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:15:48.437433 kubelet[2578]: I0117 00:15:48.437413 2578 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:15:48.437501 kubelet[2578]: I0117 00:15:48.437435 2578 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:15:48.439400 kubelet[2578]: I0117 00:15:48.439380 2578 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:15:48.443000 kubelet[2578]: E0117 00:15:48.442964 2578 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:15:48.448535 kubelet[2578]: I0117 00:15:48.448501 2578 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.450973 kubelet[2578]: I0117 00:15:48.449819 2578 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.451716 kubelet[2578]: I0117 00:15:48.451635 2578 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.464663 kubelet[2578]: I0117 00:15:48.463497 2578 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 17 00:15:48.471410 kubelet[2578]: I0117 00:15:48.470863 2578 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 17 00:15:48.471410 kubelet[2578]: I0117 00:15:48.471155 2578 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 17 00:15:48.552801 kubelet[2578]: I0117 00:15:48.552764 2578 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.565737 kubelet[2578]: I0117 00:15:48.565701 2578 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.566302 kubelet[2578]: I0117 00:15:48.565800 2578 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.635654 kubelet[2578]: I0117 00:15:48.635433 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27796e07207e9ee24dff9ffb40bbf1df-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"27796e07207e9ee24dff9ffb40bbf1df\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.635654 kubelet[2578]: I0117 00:15:48.635508 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27ae9f233eeb85572a48b647c881ba4e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"27ae9f233eeb85572a48b647c881ba4e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.635654 kubelet[2578]: I0117 00:15:48.635545 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27ae9f233eeb85572a48b647c881ba4e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"27ae9f233eeb85572a48b647c881ba4e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.635654 kubelet[2578]: I0117 00:15:48.635576 2578 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a98d89a22d020390fa301fcad6a02fe-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"3a98d89a22d020390fa301fcad6a02fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.636010 kubelet[2578]: I0117 00:15:48.635609 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a98d89a22d020390fa301fcad6a02fe-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"3a98d89a22d020390fa301fcad6a02fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.636010 kubelet[2578]: I0117 00:15:48.635636 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a98d89a22d020390fa301fcad6a02fe-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"3a98d89a22d020390fa301fcad6a02fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.636010 kubelet[2578]: I0117 00:15:48.635672 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27ae9f233eeb85572a48b647c881ba4e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"27ae9f233eeb85572a48b647c881ba4e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.636010 kubelet[2578]: I0117 00:15:48.635706 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a98d89a22d020390fa301fcad6a02fe-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"3a98d89a22d020390fa301fcad6a02fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:48.636168 kubelet[2578]: I0117 00:15:48.635734 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a98d89a22d020390fa301fcad6a02fe-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" (UID: \"3a98d89a22d020390fa301fcad6a02fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:49.258423 kubelet[2578]: I0117 00:15:49.258301 2578 apiserver.go:52] "Watching apiserver" Jan 17 00:15:49.337181 kubelet[2578]: I0117 00:15:49.337139 2578 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:15:49.383113 kubelet[2578]: I0117 00:15:49.383078 2578 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:49.394150 kubelet[2578]: I0117 00:15:49.393269 2578 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must 
be no more than 63 characters]" Jan 17 00:15:49.394150 kubelet[2578]: E0117 00:15:49.393334 2578 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:15:49.432567 kubelet[2578]: I0117 00:15:49.432499 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" podStartSLOduration=1.432418708 podStartE2EDuration="1.432418708s" podCreationTimestamp="2026-01-17 00:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:15:49.418301534 +0000 UTC m=+1.250730238" watchObservedRunningTime="2026-01-17 00:15:49.432418708 +0000 UTC m=+1.264847413" Jan 17 00:15:49.449277 kubelet[2578]: I0117 00:15:49.449186 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" podStartSLOduration=1.449165916 podStartE2EDuration="1.449165916s" podCreationTimestamp="2026-01-17 00:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:15:49.433233394 +0000 UTC m=+1.265662097" watchObservedRunningTime="2026-01-17 00:15:49.449165916 +0000 UTC m=+1.281594623" Jan 17 00:15:49.462769 kubelet[2578]: I0117 00:15:49.462705 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" podStartSLOduration=1.462684772 podStartE2EDuration="1.462684772s" podCreationTimestamp="2026-01-17 00:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:15:49.449531707 +0000 UTC m=+1.281960412" watchObservedRunningTime="2026-01-17 00:15:49.462684772 +0000 UTC m=+1.295113465" Jan 17 00:15:49.747296 update_engine[1452]: I20260117 00:15:49.747186 1452 update_attempter.cc:509] Updating boot flags... Jan 17 00:15:49.815222 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2634) Jan 17 00:15:49.938074 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2633) Jan 17 00:15:53.716785 kubelet[2578]: I0117 00:15:53.716744 2578 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:15:53.717515 containerd[1459]: time="2026-01-17T00:15:53.717216900Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:15:53.717968 kubelet[2578]: I0117 00:15:53.717545 2578 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:15:54.409866 systemd[1]: Created slice kubepods-besteffort-pod1711dcd1_43bc_42ae_b0db_9c45f27c1ddd.slice - libcontainer container kubepods-besteffort-pod1711dcd1_43bc_42ae_b0db_9c45f27c1ddd.slice. 
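[editor's note] The pod CIDR entries above show the kubelet learning the node's range (originalPodCIDR="" → 192.168.0.0/24), pushing it to containerd over CRI, and containerd waiting for a CNI plugin (Calico, installed later in this log) to drop its config. The CIDR the API server assigned can be read back directly; a hedged usage example, assuming kubectl access and the node name registered above:

    kubectl get node ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3 -o jsonpath='{.spec.podCIDR}'
    # expected for this node: 192.168.0.0/24
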
Jan 17 00:15:54.471556 kubelet[2578]: I0117 00:15:54.471201 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1711dcd1-43bc-42ae-b0db-9c45f27c1ddd-kube-proxy\") pod \"kube-proxy-bg5nq\" (UID: \"1711dcd1-43bc-42ae-b0db-9c45f27c1ddd\") " pod="kube-system/kube-proxy-bg5nq" Jan 17 00:15:54.471556 kubelet[2578]: I0117 00:15:54.471359 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grlk6\" (UniqueName: \"kubernetes.io/projected/1711dcd1-43bc-42ae-b0db-9c45f27c1ddd-kube-api-access-grlk6\") pod \"kube-proxy-bg5nq\" (UID: \"1711dcd1-43bc-42ae-b0db-9c45f27c1ddd\") " pod="kube-system/kube-proxy-bg5nq" Jan 17 00:15:54.471556 kubelet[2578]: I0117 00:15:54.471398 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1711dcd1-43bc-42ae-b0db-9c45f27c1ddd-xtables-lock\") pod \"kube-proxy-bg5nq\" (UID: \"1711dcd1-43bc-42ae-b0db-9c45f27c1ddd\") " pod="kube-system/kube-proxy-bg5nq" Jan 17 00:15:54.471556 kubelet[2578]: I0117 00:15:54.471448 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1711dcd1-43bc-42ae-b0db-9c45f27c1ddd-lib-modules\") pod \"kube-proxy-bg5nq\" (UID: \"1711dcd1-43bc-42ae-b0db-9c45f27c1ddd\") " pod="kube-system/kube-proxy-bg5nq" Jan 17 00:15:54.583870 kubelet[2578]: E0117 00:15:54.583820 2578 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 00:15:54.584055 kubelet[2578]: E0117 00:15:54.583884 2578 projected.go:196] Error preparing data for projected volume kube-api-access-grlk6 for pod kube-system/kube-proxy-bg5nq: configmap "kube-root-ca.crt" not found Jan 17 00:15:54.584055 kubelet[2578]: E0117 00:15:54.583992 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1711dcd1-43bc-42ae-b0db-9c45f27c1ddd-kube-api-access-grlk6 podName:1711dcd1-43bc-42ae-b0db-9c45f27c1ddd nodeName:}" failed. No retries permitted until 2026-01-17 00:15:55.083963056 +0000 UTC m=+6.916391759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-grlk6" (UniqueName: "kubernetes.io/projected/1711dcd1-43bc-42ae-b0db-9c45f27c1ddd-kube-api-access-grlk6") pod "kube-proxy-bg5nq" (UID: "1711dcd1-43bc-42ae-b0db-9c45f27c1ddd") : configmap "kube-root-ca.crt" not found Jan 17 00:15:54.893204 systemd[1]: Created slice kubepods-besteffort-pod4f932c04_f2bd_44ab_aff3_c733d9805f46.slice - libcontainer container kubepods-besteffort-pod4f932c04_f2bd_44ab_aff3_c733d9805f46.slice. 
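[editor's note] The MountVolume.SetUp failure above is the kubelet unable to build the kube-api-access-grlk6 projected volume because the kube-root-ca.crt ConfigMap has not yet been published into kube-system; it simply retries 500ms later (durationBeforeRetry 500ms). Roughly the shape of that auto-generated projected volume, shown as a sketch rather than the exact object:

    volumes:
    - name: kube-api-access-grlk6
      projected:
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            name: kube-root-ca.crt        # the ConfigMap reported missing above
            items:
            - key: ca.crt
              path: ca.crt
        - downwardAPI:
            items:
            - path: namespace
              fieldRef:
                fieldPath: metadata.namespace
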
Jan 17 00:15:54.975775 kubelet[2578]: I0117 00:15:54.975647 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f932c04-f2bd-44ab-aff3-c733d9805f46-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-lgvn7\" (UID: \"4f932c04-f2bd-44ab-aff3-c733d9805f46\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-lgvn7" Jan 17 00:15:54.975775 kubelet[2578]: I0117 00:15:54.975716 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92nd6\" (UniqueName: \"kubernetes.io/projected/4f932c04-f2bd-44ab-aff3-c733d9805f46-kube-api-access-92nd6\") pod \"tigera-operator-65cdcdfd6d-lgvn7\" (UID: \"4f932c04-f2bd-44ab-aff3-c733d9805f46\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-lgvn7" Jan 17 00:15:55.204538 containerd[1459]: time="2026-01-17T00:15:55.204402723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-lgvn7,Uid:4f932c04-f2bd-44ab-aff3-c733d9805f46,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:15:55.247338 containerd[1459]: time="2026-01-17T00:15:55.246864148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:15:55.247338 containerd[1459]: time="2026-01-17T00:15:55.246957453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:15:55.247338 containerd[1459]: time="2026-01-17T00:15:55.247055042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:55.247338 containerd[1459]: time="2026-01-17T00:15:55.247226128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:55.287274 systemd[1]: Started cri-containerd-7424c6db66a34501c12e6f2d0622f031b0b025be10227fcbc8049ce348726e47.scope - libcontainer container 7424c6db66a34501c12e6f2d0622f031b0b025be10227fcbc8049ce348726e47. Jan 17 00:15:55.326555 containerd[1459]: time="2026-01-17T00:15:55.325956606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bg5nq,Uid:1711dcd1-43bc-42ae-b0db-9c45f27c1ddd,Namespace:kube-system,Attempt:0,}" Jan 17 00:15:55.354329 containerd[1459]: time="2026-01-17T00:15:55.354284289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-lgvn7,Uid:4f932c04-f2bd-44ab-aff3-c733d9805f46,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7424c6db66a34501c12e6f2d0622f031b0b025be10227fcbc8049ce348726e47\"" Jan 17 00:15:55.362666 containerd[1459]: time="2026-01-17T00:15:55.362600263Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:15:55.370269 containerd[1459]: time="2026-01-17T00:15:55.369993133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:15:55.370269 containerd[1459]: time="2026-01-17T00:15:55.370067532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:15:55.370269 containerd[1459]: time="2026-01-17T00:15:55.370080812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:55.370269 containerd[1459]: time="2026-01-17T00:15:55.370176530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:55.391296 systemd[1]: Started cri-containerd-24fb80040c33557fd54939b17dde80fd46288a9c6eea4e4d5f3b9a5bcf4a0f45.scope - libcontainer container 24fb80040c33557fd54939b17dde80fd46288a9c6eea4e4d5f3b9a5bcf4a0f45. Jan 17 00:15:55.427845 containerd[1459]: time="2026-01-17T00:15:55.427721132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bg5nq,Uid:1711dcd1-43bc-42ae-b0db-9c45f27c1ddd,Namespace:kube-system,Attempt:0,} returns sandbox id \"24fb80040c33557fd54939b17dde80fd46288a9c6eea4e4d5f3b9a5bcf4a0f45\"" Jan 17 00:15:55.434554 containerd[1459]: time="2026-01-17T00:15:55.434487071Z" level=info msg="CreateContainer within sandbox \"24fb80040c33557fd54939b17dde80fd46288a9c6eea4e4d5f3b9a5bcf4a0f45\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:15:55.451370 containerd[1459]: time="2026-01-17T00:15:55.451226207Z" level=info msg="CreateContainer within sandbox \"24fb80040c33557fd54939b17dde80fd46288a9c6eea4e4d5f3b9a5bcf4a0f45\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e4db2f74b139bc1fe5fc8f6ea55ff6fd109efe217500cb221319288dd9d11b7c\"" Jan 17 00:15:55.452332 containerd[1459]: time="2026-01-17T00:15:55.452136120Z" level=info msg="StartContainer for \"e4db2f74b139bc1fe5fc8f6ea55ff6fd109efe217500cb221319288dd9d11b7c\"" Jan 17 00:15:55.499274 systemd[1]: Started cri-containerd-e4db2f74b139bc1fe5fc8f6ea55ff6fd109efe217500cb221319288dd9d11b7c.scope - libcontainer container e4db2f74b139bc1fe5fc8f6ea55ff6fd109efe217500cb221319288dd9d11b7c. Jan 17 00:15:55.538796 containerd[1459]: time="2026-01-17T00:15:55.538329948Z" level=info msg="StartContainer for \"e4db2f74b139bc1fe5fc8f6ea55ff6fd109efe217500cb221319288dd9d11b7c\" returns successfully" Jan 17 00:15:56.096824 systemd[1]: run-containerd-runc-k8s.io-7424c6db66a34501c12e6f2d0622f031b0b025be10227fcbc8049ce348726e47-runc.xcB40V.mount: Deactivated successfully. Jan 17 00:15:56.533339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286996484.mount: Deactivated successfully. 
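[editor's note] At this point the kube-proxy-bg5nq sandbox and container exist under containerd and can be inspected from the node with crictl; a usage sketch, assuming crictl is pointed at the containerd CRI socket:

    crictl pods --name kube-proxy-bg5nq
    crictl ps --name kube-proxy
    crictl logs e4db2f74b139bc1fe5fc8f6ea55ff6fd109efe217500cb221319288dd9d11b7c
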
Jan 17 00:15:57.014507 kubelet[2578]: I0117 00:15:57.013708 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bg5nq" podStartSLOduration=3.013686459 podStartE2EDuration="3.013686459s" podCreationTimestamp="2026-01-17 00:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:15:56.48208614 +0000 UTC m=+8.314514846" watchObservedRunningTime="2026-01-17 00:15:57.013686459 +0000 UTC m=+8.846115165" Jan 17 00:15:58.069080 containerd[1459]: time="2026-01-17T00:15:58.067573142Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:58.070014 containerd[1459]: time="2026-01-17T00:15:58.069863763Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:15:58.070284 containerd[1459]: time="2026-01-17T00:15:58.070245327Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:58.073245 containerd[1459]: time="2026-01-17T00:15:58.073197912Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:58.074278 containerd[1459]: time="2026-01-17T00:15:58.074231947Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.711327815s" Jan 17 00:15:58.074371 containerd[1459]: time="2026-01-17T00:15:58.074284032Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:15:58.079744 containerd[1459]: time="2026-01-17T00:15:58.079704166Z" level=info msg="CreateContainer within sandbox \"7424c6db66a34501c12e6f2d0622f031b0b025be10227fcbc8049ce348726e47\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:15:58.095901 containerd[1459]: time="2026-01-17T00:15:58.095839264Z" level=info msg="CreateContainer within sandbox \"7424c6db66a34501c12e6f2d0622f031b0b025be10227fcbc8049ce348726e47\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4e466e828acc3b4de3c99625ec04f329b3f41a81ae0528f4473b4eb9e35c59f9\"" Jan 17 00:15:58.096976 containerd[1459]: time="2026-01-17T00:15:58.096938068Z" level=info msg="StartContainer for \"4e466e828acc3b4de3c99625ec04f329b3f41a81ae0528f4473b4eb9e35c59f9\"" Jan 17 00:15:58.146274 systemd[1]: Started cri-containerd-4e466e828acc3b4de3c99625ec04f329b3f41a81ae0528f4473b4eb9e35c59f9.scope - libcontainer container 4e466e828acc3b4de3c99625ec04f329b3f41a81ae0528f4473b4eb9e35c59f9. 
Jan 17 00:15:58.190255 containerd[1459]: time="2026-01-17T00:15:58.190077929Z" level=info msg="StartContainer for \"4e466e828acc3b4de3c99625ec04f329b3f41a81ae0528f4473b4eb9e35c59f9\" returns successfully" Jan 17 00:15:58.430724 kubelet[2578]: I0117 00:15:58.430412 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-lgvn7" podStartSLOduration=1.712954538 podStartE2EDuration="4.430370766s" podCreationTimestamp="2026-01-17 00:15:54 +0000 UTC" firstStartedPulling="2026-01-17 00:15:55.358129717 +0000 UTC m=+7.190558400" lastFinishedPulling="2026-01-17 00:15:58.075545945 +0000 UTC m=+9.907974628" observedRunningTime="2026-01-17 00:15:58.429760664 +0000 UTC m=+10.262189372" watchObservedRunningTime="2026-01-17 00:15:58.430370766 +0000 UTC m=+10.262799473" Jan 17 00:16:03.330303 sudo[1735]: pam_unix(sudo:session): session closed for user root Jan 17 00:16:03.364938 sshd[1732]: pam_unix(sshd:session): session closed for user core Jan 17 00:16:03.380258 systemd[1]: sshd@8-10.128.0.91:22-4.153.228.146:59302.service: Deactivated successfully. Jan 17 00:16:03.384022 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:16:03.386018 systemd[1]: session-9.scope: Consumed 7.176s CPU time, 161.7M memory peak, 0B memory swap peak. Jan 17 00:16:03.388138 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:16:03.391725 systemd-logind[1449]: Removed session 9. Jan 17 00:16:10.981277 kubelet[2578]: I0117 00:16:10.981231 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cwzf\" (UniqueName: \"kubernetes.io/projected/911a38b1-a2d2-4058-8082-ac7beca57988-kube-api-access-6cwzf\") pod \"calico-typha-6bb455c45c-9rq4z\" (UID: \"911a38b1-a2d2-4058-8082-ac7beca57988\") " pod="calico-system/calico-typha-6bb455c45c-9rq4z" Jan 17 00:16:10.981789 kubelet[2578]: I0117 00:16:10.981310 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/911a38b1-a2d2-4058-8082-ac7beca57988-typha-certs\") pod \"calico-typha-6bb455c45c-9rq4z\" (UID: \"911a38b1-a2d2-4058-8082-ac7beca57988\") " pod="calico-system/calico-typha-6bb455c45c-9rq4z" Jan 17 00:16:10.981789 kubelet[2578]: I0117 00:16:10.981349 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/911a38b1-a2d2-4058-8082-ac7beca57988-tigera-ca-bundle\") pod \"calico-typha-6bb455c45c-9rq4z\" (UID: \"911a38b1-a2d2-4058-8082-ac7beca57988\") " pod="calico-system/calico-typha-6bb455c45c-9rq4z" Jan 17 00:16:10.990868 systemd[1]: Created slice kubepods-besteffort-pod911a38b1_a2d2_4058_8082_ac7beca57988.slice - libcontainer container kubepods-besteffort-pod911a38b1_a2d2_4058_8082_ac7beca57988.slice. Jan 17 00:16:11.252573 systemd[1]: Created slice kubepods-besteffort-pod9b78d874_8d1b_43c4_ab06_286a3a669929.slice - libcontainer container kubepods-besteffort-pod9b78d874_8d1b_43c4_ab06_286a3a669929.slice. 
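[editor's note] The tigera-operator startup entry above shows how pod_startup_latency_tracker splits its two durations: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (00:15:58.430370766 − 00:15:54 = 4.430370766s), while podStartSLOduration additionally excludes image pulling, i.e. 4.430370766s − (00:15:58.075545945 − 00:15:55.358129717 = 2.717416228s) = 1.712954538s, which matches the logged value.
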
Jan 17 00:16:11.285274 kubelet[2578]: I0117 00:16:11.285188 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9b78d874-8d1b-43c4-ab06-286a3a669929-flexvol-driver-host\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.285274 kubelet[2578]: I0117 00:16:11.285261 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9b78d874-8d1b-43c4-ab06-286a3a669929-var-run-calico\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.285644 kubelet[2578]: I0117 00:16:11.285294 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9b78d874-8d1b-43c4-ab06-286a3a669929-cni-bin-dir\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.285644 kubelet[2578]: I0117 00:16:11.285317 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9b78d874-8d1b-43c4-ab06-286a3a669929-cni-net-dir\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.285644 kubelet[2578]: I0117 00:16:11.285344 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b78d874-8d1b-43c4-ab06-286a3a669929-xtables-lock\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.285644 kubelet[2578]: I0117 00:16:11.285369 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b78d874-8d1b-43c4-ab06-286a3a669929-tigera-ca-bundle\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.285644 kubelet[2578]: I0117 00:16:11.285393 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b78d874-8d1b-43c4-ab06-286a3a669929-lib-modules\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.286510 kubelet[2578]: I0117 00:16:11.285417 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9b78d874-8d1b-43c4-ab06-286a3a669929-policysync\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.286510 kubelet[2578]: I0117 00:16:11.285441 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9b78d874-8d1b-43c4-ab06-286a3a669929-var-lib-calico\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.286510 kubelet[2578]: I0117 00:16:11.285467 2578 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8vcr\" (UniqueName: \"kubernetes.io/projected/9b78d874-8d1b-43c4-ab06-286a3a669929-kube-api-access-c8vcr\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.286510 kubelet[2578]: I0117 00:16:11.285492 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9b78d874-8d1b-43c4-ab06-286a3a669929-cni-log-dir\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.286510 kubelet[2578]: I0117 00:16:11.285523 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9b78d874-8d1b-43c4-ab06-286a3a669929-node-certs\") pod \"calico-node-csq5g\" (UID: \"9b78d874-8d1b-43c4-ab06-286a3a669929\") " pod="calico-system/calico-node-csq5g" Jan 17 00:16:11.302269 containerd[1459]: time="2026-01-17T00:16:11.301214578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb455c45c-9rq4z,Uid:911a38b1-a2d2-4058-8082-ac7beca57988,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:11.338528 containerd[1459]: time="2026-01-17T00:16:11.338340592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:11.338528 containerd[1459]: time="2026-01-17T00:16:11.338429886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:11.338803 containerd[1459]: time="2026-01-17T00:16:11.338458874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:11.338803 containerd[1459]: time="2026-01-17T00:16:11.338710264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:11.379456 systemd[1]: Started cri-containerd-c912e5f4294168552c30cbf8b845cbaec6427cd47227691cf1179e9fad26886a.scope - libcontainer container c912e5f4294168552c30cbf8b845cbaec6427cd47227691cf1179e9fad26886a. Jan 17 00:16:11.393390 kubelet[2578]: E0117 00:16:11.392958 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.393390 kubelet[2578]: W0117 00:16:11.392988 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.393390 kubelet[2578]: E0117 00:16:11.393221 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.396377 kubelet[2578]: E0117 00:16:11.395625 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.396377 kubelet[2578]: W0117 00:16:11.395850 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.396377 kubelet[2578]: E0117 00:16:11.395880 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.400113 kubelet[2578]: E0117 00:16:11.399957 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.400113 kubelet[2578]: W0117 00:16:11.399982 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.400113 kubelet[2578]: E0117 00:16:11.400002 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.400577 kubelet[2578]: E0117 00:16:11.400538 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.400577 kubelet[2578]: W0117 00:16:11.400561 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.401168 kubelet[2578]: E0117 00:16:11.400582 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.401168 kubelet[2578]: E0117 00:16:11.401084 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.401168 kubelet[2578]: W0117 00:16:11.401100 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.401168 kubelet[2578]: E0117 00:16:11.401118 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.402250 kubelet[2578]: E0117 00:16:11.401550 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.402250 kubelet[2578]: W0117 00:16:11.401565 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.402250 kubelet[2578]: E0117 00:16:11.401581 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.402250 kubelet[2578]: E0117 00:16:11.402082 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.402250 kubelet[2578]: W0117 00:16:11.402099 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.402250 kubelet[2578]: E0117 00:16:11.402116 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.402579 kubelet[2578]: E0117 00:16:11.402485 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.402579 kubelet[2578]: W0117 00:16:11.402500 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.402579 kubelet[2578]: E0117 00:16:11.402516 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.403576 kubelet[2578]: E0117 00:16:11.402835 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.403576 kubelet[2578]: W0117 00:16:11.402854 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.403576 kubelet[2578]: E0117 00:16:11.402870 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.403576 kubelet[2578]: E0117 00:16:11.403343 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.403576 kubelet[2578]: W0117 00:16:11.403358 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.403576 kubelet[2578]: E0117 00:16:11.403374 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.421105 kubelet[2578]: E0117 00:16:11.420891 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.421105 kubelet[2578]: W0117 00:16:11.420912 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.421105 kubelet[2578]: E0117 00:16:11.420982 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.441451 kubelet[2578]: E0117 00:16:11.439891 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:16:11.502441 kubelet[2578]: E0117 00:16:11.502395 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.502441 kubelet[2578]: W0117 00:16:11.502435 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.502676 kubelet[2578]: E0117 00:16:11.502467 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.503286 kubelet[2578]: E0117 00:16:11.502983 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.503286 kubelet[2578]: W0117 00:16:11.503005 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.503286 kubelet[2578]: E0117 00:16:11.503026 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.505898 kubelet[2578]: E0117 00:16:11.505271 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.505898 kubelet[2578]: W0117 00:16:11.505292 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.505898 kubelet[2578]: E0117 00:16:11.505311 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.505898 kubelet[2578]: E0117 00:16:11.505690 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.505898 kubelet[2578]: W0117 00:16:11.505706 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.505898 kubelet[2578]: E0117 00:16:11.505725 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.508444 kubelet[2578]: E0117 00:16:11.507535 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.508444 kubelet[2578]: W0117 00:16:11.507555 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.508444 kubelet[2578]: E0117 00:16:11.507573 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.508444 kubelet[2578]: E0117 00:16:11.508352 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.508444 kubelet[2578]: W0117 00:16:11.508368 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.508444 kubelet[2578]: E0117 00:16:11.508385 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.508824 kubelet[2578]: E0117 00:16:11.508790 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.508824 kubelet[2578]: W0117 00:16:11.508806 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.508824 kubelet[2578]: E0117 00:16:11.508821 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.510984 kubelet[2578]: E0117 00:16:11.509156 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.510984 kubelet[2578]: W0117 00:16:11.509174 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.510984 kubelet[2578]: E0117 00:16:11.509191 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.510984 kubelet[2578]: E0117 00:16:11.509559 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.510984 kubelet[2578]: W0117 00:16:11.509573 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.510984 kubelet[2578]: E0117 00:16:11.509590 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.510984 kubelet[2578]: E0117 00:16:11.509938 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.510984 kubelet[2578]: W0117 00:16:11.509953 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.510984 kubelet[2578]: E0117 00:16:11.509969 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.510984 kubelet[2578]: E0117 00:16:11.510330 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.511527 kubelet[2578]: W0117 00:16:11.510343 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.511527 kubelet[2578]: E0117 00:16:11.510361 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.513182 kubelet[2578]: E0117 00:16:11.512234 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.513182 kubelet[2578]: W0117 00:16:11.512266 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.513182 kubelet[2578]: E0117 00:16:11.512286 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.513391 kubelet[2578]: E0117 00:16:11.513313 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.513391 kubelet[2578]: W0117 00:16:11.513328 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.513391 kubelet[2578]: E0117 00:16:11.513345 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.514575 kubelet[2578]: E0117 00:16:11.514159 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.514575 kubelet[2578]: W0117 00:16:11.514177 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.514575 kubelet[2578]: E0117 00:16:11.514194 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.518173 kubelet[2578]: E0117 00:16:11.518145 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.518173 kubelet[2578]: W0117 00:16:11.518173 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.518326 kubelet[2578]: E0117 00:16:11.518190 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.519071 kubelet[2578]: E0117 00:16:11.518519 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.519071 kubelet[2578]: W0117 00:16:11.518538 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.519071 kubelet[2578]: E0117 00:16:11.518554 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.521604 kubelet[2578]: E0117 00:16:11.521190 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.521604 kubelet[2578]: W0117 00:16:11.521211 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.521604 kubelet[2578]: E0117 00:16:11.521230 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.523870 kubelet[2578]: E0117 00:16:11.523344 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.523870 kubelet[2578]: W0117 00:16:11.523364 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.523870 kubelet[2578]: E0117 00:16:11.523380 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.525363 kubelet[2578]: E0117 00:16:11.525319 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.525363 kubelet[2578]: W0117 00:16:11.525338 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.525363 kubelet[2578]: E0117 00:16:11.525356 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.526368 kubelet[2578]: E0117 00:16:11.526318 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.526368 kubelet[2578]: W0117 00:16:11.526338 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.526368 kubelet[2578]: E0117 00:16:11.526357 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.537307 containerd[1459]: time="2026-01-17T00:16:11.537214993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb455c45c-9rq4z,Uid:911a38b1-a2d2-4058-8082-ac7beca57988,Namespace:calico-system,Attempt:0,} returns sandbox id \"c912e5f4294168552c30cbf8b845cbaec6427cd47227691cf1179e9fad26886a\"" Jan 17 00:16:11.542917 containerd[1459]: time="2026-01-17T00:16:11.542881492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:16:11.561901 containerd[1459]: time="2026-01-17T00:16:11.561848727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-csq5g,Uid:9b78d874-8d1b-43c4-ab06-286a3a669929,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:11.598750 kubelet[2578]: E0117 00:16:11.598591 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.598750 kubelet[2578]: W0117 00:16:11.598751 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.598972 kubelet[2578]: E0117 00:16:11.598814 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.599413 kubelet[2578]: I0117 00:16:11.599374 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624-socket-dir\") pod \"csi-node-driver-49lv6\" (UID: \"0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624\") " pod="calico-system/csi-node-driver-49lv6" Jan 17 00:16:11.601072 kubelet[2578]: E0117 00:16:11.600406 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.601072 kubelet[2578]: W0117 00:16:11.600479 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.601072 kubelet[2578]: E0117 00:16:11.600507 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.601689 kubelet[2578]: E0117 00:16:11.601658 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.601828 kubelet[2578]: W0117 00:16:11.601712 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.601828 kubelet[2578]: E0117 00:16:11.601735 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.604067 kubelet[2578]: E0117 00:16:11.603027 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.604067 kubelet[2578]: W0117 00:16:11.603277 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.604067 kubelet[2578]: E0117 00:16:11.603306 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.604067 kubelet[2578]: I0117 00:16:11.603497 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxgmm\" (UniqueName: \"kubernetes.io/projected/0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624-kube-api-access-sxgmm\") pod \"csi-node-driver-49lv6\" (UID: \"0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624\") " pod="calico-system/csi-node-driver-49lv6" Jan 17 00:16:11.604965 kubelet[2578]: E0117 00:16:11.604935 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.604965 kubelet[2578]: W0117 00:16:11.604961 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.605161 kubelet[2578]: E0117 00:16:11.605000 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.605972 kubelet[2578]: E0117 00:16:11.605942 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.606520 kubelet[2578]: W0117 00:16:11.606184 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.606520 kubelet[2578]: E0117 00:16:11.606217 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.610150 kubelet[2578]: E0117 00:16:11.610129 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.610447 kubelet[2578]: W0117 00:16:11.610274 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.610447 kubelet[2578]: E0117 00:16:11.610308 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.610447 kubelet[2578]: I0117 00:16:11.610351 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624-kubelet-dir\") pod \"csi-node-driver-49lv6\" (UID: \"0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624\") " pod="calico-system/csi-node-driver-49lv6" Jan 17 00:16:11.612066 kubelet[2578]: E0117 00:16:11.611085 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.612066 kubelet[2578]: W0117 00:16:11.611105 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.612066 kubelet[2578]: E0117 00:16:11.611124 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.612066 kubelet[2578]: I0117 00:16:11.611164 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624-registration-dir\") pod \"csi-node-driver-49lv6\" (UID: \"0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624\") " pod="calico-system/csi-node-driver-49lv6" Jan 17 00:16:11.613357 kubelet[2578]: E0117 00:16:11.613151 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.613357 kubelet[2578]: W0117 00:16:11.613173 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.613357 kubelet[2578]: E0117 00:16:11.613191 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.613357 kubelet[2578]: I0117 00:16:11.613224 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624-varrun\") pod \"csi-node-driver-49lv6\" (UID: \"0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624\") " pod="calico-system/csi-node-driver-49lv6" Jan 17 00:16:11.615166 kubelet[2578]: E0117 00:16:11.615144 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.615430 kubelet[2578]: W0117 00:16:11.615300 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.615430 kubelet[2578]: E0117 00:16:11.615329 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.619386 kubelet[2578]: E0117 00:16:11.619171 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.619386 kubelet[2578]: W0117 00:16:11.619191 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.619386 kubelet[2578]: E0117 00:16:11.619210 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.619838 kubelet[2578]: E0117 00:16:11.619655 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.619838 kubelet[2578]: W0117 00:16:11.619674 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.619838 kubelet[2578]: E0117 00:16:11.619692 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.621159 kubelet[2578]: E0117 00:16:11.620201 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.621159 kubelet[2578]: W0117 00:16:11.620216 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.621159 kubelet[2578]: E0117 00:16:11.620233 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.623332 kubelet[2578]: E0117 00:16:11.623101 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.623332 kubelet[2578]: W0117 00:16:11.623124 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.623332 kubelet[2578]: E0117 00:16:11.623143 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.623685 kubelet[2578]: E0117 00:16:11.623623 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.623685 kubelet[2578]: W0117 00:16:11.623641 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.623685 kubelet[2578]: E0117 00:16:11.623658 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.645529 containerd[1459]: time="2026-01-17T00:16:11.645121262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:11.645529 containerd[1459]: time="2026-01-17T00:16:11.645356071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:11.645529 containerd[1459]: time="2026-01-17T00:16:11.645385346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:11.647031 containerd[1459]: time="2026-01-17T00:16:11.646831637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:11.680278 systemd[1]: Started cri-containerd-e1159c95d3c31d3eeaac2c90e4f735a8096aac3e12d1a7c05231d56f429d7f23.scope - libcontainer container e1159c95d3c31d3eeaac2c90e4f735a8096aac3e12d1a7c05231d56f429d7f23. Jan 17 00:16:11.710656 containerd[1459]: time="2026-01-17T00:16:11.710506868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-csq5g,Uid:9b78d874-8d1b-43c4-ab06-286a3a669929,Namespace:calico-system,Attempt:0,} returns sandbox id \"e1159c95d3c31d3eeaac2c90e4f735a8096aac3e12d1a7c05231d56f429d7f23\"" Jan 17 00:16:11.714301 kubelet[2578]: E0117 00:16:11.714191 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.714301 kubelet[2578]: W0117 00:16:11.714222 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.714301 kubelet[2578]: E0117 00:16:11.714290 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.715123 kubelet[2578]: E0117 00:16:11.715098 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.715123 kubelet[2578]: W0117 00:16:11.715119 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.715273 kubelet[2578]: E0117 00:16:11.715160 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.715656 kubelet[2578]: E0117 00:16:11.715623 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.715656 kubelet[2578]: W0117 00:16:11.715641 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.715801 kubelet[2578]: E0117 00:16:11.715660 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.716791 kubelet[2578]: E0117 00:16:11.716176 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.716791 kubelet[2578]: W0117 00:16:11.716209 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.716791 kubelet[2578]: E0117 00:16:11.716227 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.716997 kubelet[2578]: E0117 00:16:11.716856 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.716997 kubelet[2578]: W0117 00:16:11.716871 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.717225 kubelet[2578]: E0117 00:16:11.716888 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.717631 kubelet[2578]: E0117 00:16:11.717598 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.717631 kubelet[2578]: W0117 00:16:11.717617 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.717774 kubelet[2578]: E0117 00:16:11.717635 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.718129 kubelet[2578]: E0117 00:16:11.718094 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.718129 kubelet[2578]: W0117 00:16:11.718118 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.718274 kubelet[2578]: E0117 00:16:11.718135 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.718549 kubelet[2578]: E0117 00:16:11.718514 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.718629 kubelet[2578]: W0117 00:16:11.718531 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.718629 kubelet[2578]: E0117 00:16:11.718573 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.718990 kubelet[2578]: E0117 00:16:11.718958 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.718990 kubelet[2578]: W0117 00:16:11.718976 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.719142 kubelet[2578]: E0117 00:16:11.718994 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.719438 kubelet[2578]: E0117 00:16:11.719396 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.719438 kubelet[2578]: W0117 00:16:11.719424 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.719559 kubelet[2578]: E0117 00:16:11.719440 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.719906 kubelet[2578]: E0117 00:16:11.719863 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.719906 kubelet[2578]: W0117 00:16:11.719893 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.720029 kubelet[2578]: E0117 00:16:11.719910 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.720341 kubelet[2578]: E0117 00:16:11.720305 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.720341 kubelet[2578]: W0117 00:16:11.720324 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.720341 kubelet[2578]: E0117 00:16:11.720341 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.720766 kubelet[2578]: E0117 00:16:11.720734 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.720766 kubelet[2578]: W0117 00:16:11.720752 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.720914 kubelet[2578]: E0117 00:16:11.720769 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.721228 kubelet[2578]: E0117 00:16:11.721191 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.721326 kubelet[2578]: W0117 00:16:11.721228 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.721326 kubelet[2578]: E0117 00:16:11.721246 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.721652 kubelet[2578]: E0117 00:16:11.721617 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.721652 kubelet[2578]: W0117 00:16:11.721633 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.721769 kubelet[2578]: E0117 00:16:11.721671 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.722119 kubelet[2578]: E0117 00:16:11.722098 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.722119 kubelet[2578]: W0117 00:16:11.722115 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.722276 kubelet[2578]: E0117 00:16:11.722131 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.722618 kubelet[2578]: E0117 00:16:11.722583 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.722618 kubelet[2578]: W0117 00:16:11.722600 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.722618 kubelet[2578]: E0117 00:16:11.722617 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.723188 kubelet[2578]: E0117 00:16:11.723161 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.723188 kubelet[2578]: W0117 00:16:11.723180 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.723441 kubelet[2578]: E0117 00:16:11.723215 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.723659 kubelet[2578]: E0117 00:16:11.723638 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.723659 kubelet[2578]: W0117 00:16:11.723656 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.723795 kubelet[2578]: E0117 00:16:11.723673 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.724158 kubelet[2578]: E0117 00:16:11.724128 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.724158 kubelet[2578]: W0117 00:16:11.724146 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.724327 kubelet[2578]: E0117 00:16:11.724163 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.724668 kubelet[2578]: E0117 00:16:11.724647 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.724668 kubelet[2578]: W0117 00:16:11.724665 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.724796 kubelet[2578]: E0117 00:16:11.724683 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:11.725124 kubelet[2578]: E0117 00:16:11.725102 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.725124 kubelet[2578]: W0117 00:16:11.725120 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.725265 kubelet[2578]: E0117 00:16:11.725137 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.725536 kubelet[2578]: E0117 00:16:11.725517 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.725622 kubelet[2578]: W0117 00:16:11.725560 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.725622 kubelet[2578]: E0117 00:16:11.725578 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.726010 kubelet[2578]: E0117 00:16:11.725990 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.726010 kubelet[2578]: W0117 00:16:11.726008 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.726160 kubelet[2578]: E0117 00:16:11.726024 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.726878 kubelet[2578]: E0117 00:16:11.726851 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.726878 kubelet[2578]: W0117 00:16:11.726875 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.727030 kubelet[2578]: E0117 00:16:11.726894 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:11.742471 kubelet[2578]: E0117 00:16:11.742152 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:11.742471 kubelet[2578]: W0117 00:16:11.742171 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:11.742471 kubelet[2578]: E0117 00:16:11.742207 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:12.100901 systemd[1]: run-containerd-runc-k8s.io-c912e5f4294168552c30cbf8b845cbaec6427cd47227691cf1179e9fad26886a-runc.1DE8Yr.mount: Deactivated successfully. Jan 17 00:16:12.536463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127590733.mount: Deactivated successfully. Jan 17 00:16:13.347668 kubelet[2578]: E0117 00:16:13.347604 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:16:13.904170 containerd[1459]: time="2026-01-17T00:16:13.903979822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:13.905788 containerd[1459]: time="2026-01-17T00:16:13.905695021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 17 00:16:13.908184 containerd[1459]: time="2026-01-17T00:16:13.908136243Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:13.912124 containerd[1459]: time="2026-01-17T00:16:13.912018999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:13.913885 containerd[1459]: time="2026-01-17T00:16:13.913831882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.370905911s" Jan 17 00:16:13.913885 containerd[1459]: time="2026-01-17T00:16:13.913869662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 17 00:16:13.918257 containerd[1459]: time="2026-01-17T00:16:13.918217348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:16:13.943231 containerd[1459]: time="2026-01-17T00:16:13.943190910Z" level=info msg="CreateContainer within sandbox \"c912e5f4294168552c30cbf8b845cbaec6427cd47227691cf1179e9fad26886a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 00:16:13.964801 containerd[1459]: time="2026-01-17T00:16:13.964671127Z" level=info msg="CreateContainer within sandbox \"c912e5f4294168552c30cbf8b845cbaec6427cd47227691cf1179e9fad26886a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9ed2509259553ddef19d33d0a9598e1d16065ad9baa227d93923006a88bf2ed7\"" Jan 17 00:16:13.966608 containerd[1459]: time="2026-01-17T00:16:13.966478198Z" level=info msg="StartContainer for \"9ed2509259553ddef19d33d0a9598e1d16065ad9baa227d93923006a88bf2ed7\"" Jan 17 00:16:14.020248 systemd[1]: Started cri-containerd-9ed2509259553ddef19d33d0a9598e1d16065ad9baa227d93923006a88bf2ed7.scope - libcontainer container 9ed2509259553ddef19d33d0a9598e1d16065ad9baa227d93923006a88bf2ed7. 
Jan 17 00:16:14.084451 containerd[1459]: time="2026-01-17T00:16:14.084402423Z" level=info msg="StartContainer for \"9ed2509259553ddef19d33d0a9598e1d16065ad9baa227d93923006a88bf2ed7\" returns successfully" Jan 17 00:16:14.652683 kubelet[2578]: E0117 00:16:14.652567 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:14.652683 kubelet[2578]: W0117 00:16:14.652590 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:14.652683 kubelet[2578]: E0117 00:16:14.652607 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:14.653823 kubelet[2578]: E0117 00:16:14.653002 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:14.653823 kubelet[2578]: W0117 00:16:14.653017 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:14.653823 kubelet[2578]: E0117 00:16:14.653034 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:14.655126 kubelet[2578]: E0117 00:16:14.655095 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:14.655126 kubelet[2578]: W0117 00:16:14.655119 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:14.655287 kubelet[2578]: E0117 00:16:14.655137 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:14.656519 kubelet[2578]: E0117 00:16:14.656335 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:14.656519 kubelet[2578]: W0117 00:16:14.656514 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:14.656669 kubelet[2578]: E0117 00:16:14.656534 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:14.657284 kubelet[2578]: E0117 00:16:14.657240 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:14.657284 kubelet[2578]: W0117 00:16:14.657266 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:14.657284 kubelet[2578]: E0117 00:16:14.657285 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:14.893933 containerd[1459]: time="2026-01-17T00:16:14.892724375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:14.897442 containerd[1459]: time="2026-01-17T00:16:14.897372625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 17 00:16:14.899439 containerd[1459]: time="2026-01-17T00:16:14.899279646Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:14.903784 containerd[1459]: time="2026-01-17T00:16:14.903726569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:14.904752 containerd[1459]: time="2026-01-17T00:16:14.904591473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 986.325559ms" Jan 17 00:16:14.904752 containerd[1459]: time="2026-01-17T00:16:14.904639858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:16:14.909927 containerd[1459]: time="2026-01-17T00:16:14.909817821Z" level=info msg="CreateContainer within sandbox \"e1159c95d3c31d3eeaac2c90e4f735a8096aac3e12d1a7c05231d56f429d7f23\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:16:14.932207 containerd[1459]: time="2026-01-17T00:16:14.932156579Z" level=info msg="CreateContainer within sandbox \"e1159c95d3c31d3eeaac2c90e4f735a8096aac3e12d1a7c05231d56f429d7f23\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6a713d61b38ca9e271571042f35157b4ee6829170fb3136051dd7530aba00274\"" Jan 17 00:16:14.933085 containerd[1459]: time="2026-01-17T00:16:14.933001350Z" level=info msg="StartContainer for \"6a713d61b38ca9e271571042f35157b4ee6829170fb3136051dd7530aba00274\"" Jan 17 00:16:14.989254 systemd[1]: Started cri-containerd-6a713d61b38ca9e271571042f35157b4ee6829170fb3136051dd7530aba00274.scope - libcontainer container 6a713d61b38ca9e271571042f35157b4ee6829170fb3136051dd7530aba00274. Jan 17 00:16:15.038874 containerd[1459]: time="2026-01-17T00:16:15.038752213Z" level=info msg="StartContainer for \"6a713d61b38ca9e271571042f35157b4ee6829170fb3136051dd7530aba00274\" returns successfully" Jan 17 00:16:15.060687 systemd[1]: cri-containerd-6a713d61b38ca9e271571042f35157b4ee6829170fb3136051dd7530aba00274.scope: Deactivated successfully. Jan 17 00:16:15.102689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a713d61b38ca9e271571042f35157b4ee6829170fb3136051dd7530aba00274-rootfs.mount: Deactivated successfully. 
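The repeated driver-call.go / plugins.go errors above come from the kubelet's FlexVolume prober: for each vendor~driver directory under the volume plugin dir (here nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/) it runs the driver binary with the init argument and expects a JSON status object on stdout. Because the uds executable does not exist yet ("executable file not found in $PATH"), every call returns empty output and JSON unmarshalling fails with "unexpected end of JSON input". The flexvol-driver container started just above (from the pod2daemon-flexvol image) is the component that installs that uds binary, which is why the probe errors eventually stop. Below is a minimal sketch of the reply a FlexVolume driver is expected to print for init; the JSON shape follows the FlexVolume convention, while the struct and file names are illustrative, not the kubelet's own types.

    // flexvol_init_sketch.go
    //
    // Minimal sketch of a FlexVolume driver answering the kubelet's "init" call.
    // The kubelet parses stdout as JSON; an empty reply reproduces the
    // "unexpected end of JSON input" errors seen in the log.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`                 // "Success", "Failure", or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
    }

    func main() {
        verb := ""
        if len(os.Args) > 1 {
            verb = os.Args[1]
        }
        if verb == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Verbs this sketch does not implement are reported as not supported.
        out, _ := json.Marshal(driverStatus{Status: "Not supported", Message: verb})
        fmt.Println(string(out))
    }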
Jan 17 00:16:15.346572 kubelet[2578]: E0117 00:16:15.346498 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:16:15.489511 kubelet[2578]: I0117 00:16:15.489435 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:16:15.509610 kubelet[2578]: I0117 00:16:15.509305 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bb455c45c-9rq4z" podStartSLOduration=3.134554093 podStartE2EDuration="5.509283202s" podCreationTimestamp="2026-01-17 00:16:10 +0000 UTC" firstStartedPulling="2026-01-17 00:16:11.541102347 +0000 UTC m=+23.373531028" lastFinishedPulling="2026-01-17 00:16:13.915831453 +0000 UTC m=+25.748260137" observedRunningTime="2026-01-17 00:16:14.53557862 +0000 UTC m=+26.368007359" watchObservedRunningTime="2026-01-17 00:16:15.509283202 +0000 UTC m=+27.341711908" Jan 17 00:16:15.779796 containerd[1459]: time="2026-01-17T00:16:15.779616479Z" level=info msg="shim disconnected" id=6a713d61b38ca9e271571042f35157b4ee6829170fb3136051dd7530aba00274 namespace=k8s.io Jan 17 00:16:15.779796 containerd[1459]: time="2026-01-17T00:16:15.779697980Z" level=warning msg="cleaning up after shim disconnected" id=6a713d61b38ca9e271571042f35157b4ee6829170fb3136051dd7530aba00274 namespace=k8s.io Jan 17 00:16:15.779796 containerd[1459]: time="2026-01-17T00:16:15.779711055Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:16:16.496061 containerd[1459]: time="2026-01-17T00:16:16.495994076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:16:17.348076 kubelet[2578]: E0117 00:16:17.346628 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:16:18.034598 kubelet[2578]: I0117 00:16:18.034556 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:16:19.348184 kubelet[2578]: E0117 00:16:19.347168 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:16:19.806135 containerd[1459]: time="2026-01-17T00:16:19.806077271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:19.807555 containerd[1459]: time="2026-01-17T00:16:19.807396132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:16:19.810087 containerd[1459]: time="2026-01-17T00:16:19.808768644Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:19.811825 containerd[1459]: time="2026-01-17T00:16:19.811781910Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:19.812901 containerd[1459]: time="2026-01-17T00:16:19.812859359Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.316813419s" Jan 17 00:16:19.813107 containerd[1459]: time="2026-01-17T00:16:19.813078570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:16:19.818462 containerd[1459]: time="2026-01-17T00:16:19.818426310Z" level=info msg="CreateContainer within sandbox \"e1159c95d3c31d3eeaac2c90e4f735a8096aac3e12d1a7c05231d56f429d7f23\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:16:19.836225 containerd[1459]: time="2026-01-17T00:16:19.836183362Z" level=info msg="CreateContainer within sandbox \"e1159c95d3c31d3eeaac2c90e4f735a8096aac3e12d1a7c05231d56f429d7f23\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"40437836eadd7027c0ca8c5516797fffcd72ca30d880b1edade2768023d55887\"" Jan 17 00:16:19.838068 containerd[1459]: time="2026-01-17T00:16:19.836798851Z" level=info msg="StartContainer for \"40437836eadd7027c0ca8c5516797fffcd72ca30d880b1edade2768023d55887\"" Jan 17 00:16:19.886233 systemd[1]: Started cri-containerd-40437836eadd7027c0ca8c5516797fffcd72ca30d880b1edade2768023d55887.scope - libcontainer container 40437836eadd7027c0ca8c5516797fffcd72ca30d880b1edade2768023d55887. Jan 17 00:16:19.938582 containerd[1459]: time="2026-01-17T00:16:19.938461229Z" level=info msg="StartContainer for \"40437836eadd7027c0ca8c5516797fffcd72ca30d880b1edade2768023d55887\" returns successfully" Jan 17 00:16:20.902536 containerd[1459]: time="2026-01-17T00:16:20.902473863Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:16:20.905727 systemd[1]: cri-containerd-40437836eadd7027c0ca8c5516797fffcd72ca30d880b1edade2768023d55887.scope: Deactivated successfully. Jan 17 00:16:20.942352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40437836eadd7027c0ca8c5516797fffcd72ca30d880b1edade2768023d55887-rootfs.mount: Deactivated successfully. Jan 17 00:16:20.969128 kubelet[2578]: I0117 00:16:20.969095 2578 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 17 00:16:21.207192 systemd[1]: Created slice kubepods-burstable-pod2ecb4038_1a10_453e_a0f6_362231e5785b.slice - libcontainer container kubepods-burstable-pod2ecb4038_1a10_453e_a0f6_362231e5785b.slice. 
Jan 17 00:16:21.289007 kubelet[2578]: I0117 00:16:21.288856 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ecb4038-1a10-453e-a0f6-362231e5785b-config-volume\") pod \"coredns-66bc5c9577-nfdnh\" (UID: \"2ecb4038-1a10-453e-a0f6-362231e5785b\") " pod="kube-system/coredns-66bc5c9577-nfdnh" Jan 17 00:16:21.289007 kubelet[2578]: I0117 00:16:21.288951 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsp44\" (UniqueName: \"kubernetes.io/projected/2ecb4038-1a10-453e-a0f6-362231e5785b-kube-api-access-qsp44\") pod \"coredns-66bc5c9577-nfdnh\" (UID: \"2ecb4038-1a10-453e-a0f6-362231e5785b\") " pod="kube-system/coredns-66bc5c9577-nfdnh" Jan 17 00:16:21.503092 containerd[1459]: time="2026-01-17T00:16:21.497401089Z" level=info msg="shim disconnected" id=40437836eadd7027c0ca8c5516797fffcd72ca30d880b1edade2768023d55887 namespace=k8s.io Jan 17 00:16:21.503092 containerd[1459]: time="2026-01-17T00:16:21.497472721Z" level=warning msg="cleaning up after shim disconnected" id=40437836eadd7027c0ca8c5516797fffcd72ca30d880b1edade2768023d55887 namespace=k8s.io Jan 17 00:16:21.503092 containerd[1459]: time="2026-01-17T00:16:21.497489117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:16:21.515246 systemd[1]: Created slice kubepods-besteffort-podc0487a21_dbe0_44a7_9d70_ec67d89290e6.slice - libcontainer container kubepods-besteffort-podc0487a21_dbe0_44a7_9d70_ec67d89290e6.slice. Jan 17 00:16:21.527251 containerd[1459]: time="2026-01-17T00:16:21.527104375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nfdnh,Uid:2ecb4038-1a10-453e-a0f6-362231e5785b,Namespace:kube-system,Attempt:0,}" Jan 17 00:16:21.543800 systemd[1]: Created slice kubepods-besteffort-pod0c0f8dfe_d5e5_4727_b5d0_0b1225ee5624.slice - libcontainer container kubepods-besteffort-pod0c0f8dfe_d5e5_4727_b5d0_0b1225ee5624.slice. Jan 17 00:16:21.562195 containerd[1459]: time="2026-01-17T00:16:21.561917425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49lv6,Uid:0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:21.591259 systemd[1]: Created slice kubepods-burstable-podd801c454_90d8_47bb_9464_b452b91cd3db.slice - libcontainer container kubepods-burstable-podd801c454_90d8_47bb_9464_b452b91cd3db.slice. 
Jan 17 00:16:21.595718 kubelet[2578]: I0117 00:16:21.593976 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4dd5910c-d46c-4829-af81-73c3a3c07bf1-tigera-ca-bundle\") pod \"calico-kube-controllers-5b8d4cfc64-6pg6z\" (UID: \"4dd5910c-d46c-4829-af81-73c3a3c07bf1\") " pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" Jan 17 00:16:21.595718 kubelet[2578]: I0117 00:16:21.594025 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d801c454-90d8-47bb-9464-b452b91cd3db-config-volume\") pod \"coredns-66bc5c9577-qwwf7\" (UID: \"d801c454-90d8-47bb-9464-b452b91cd3db\") " pod="kube-system/coredns-66bc5c9577-qwwf7" Jan 17 00:16:21.595718 kubelet[2578]: I0117 00:16:21.594233 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdc6t\" (UniqueName: \"kubernetes.io/projected/c0487a21-dbe0-44a7-9d70-ec67d89290e6-kube-api-access-fdc6t\") pod \"whisker-7f7864c4bd-h6vsc\" (UID: \"c0487a21-dbe0-44a7-9d70-ec67d89290e6\") " pod="calico-system/whisker-7f7864c4bd-h6vsc" Jan 17 00:16:21.595718 kubelet[2578]: I0117 00:16:21.594281 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0487a21-dbe0-44a7-9d70-ec67d89290e6-whisker-ca-bundle\") pod \"whisker-7f7864c4bd-h6vsc\" (UID: \"c0487a21-dbe0-44a7-9d70-ec67d89290e6\") " pod="calico-system/whisker-7f7864c4bd-h6vsc" Jan 17 00:16:21.595718 kubelet[2578]: I0117 00:16:21.595151 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2rvh\" (UniqueName: \"kubernetes.io/projected/d801c454-90d8-47bb-9464-b452b91cd3db-kube-api-access-b2rvh\") pod \"coredns-66bc5c9577-qwwf7\" (UID: \"d801c454-90d8-47bb-9464-b452b91cd3db\") " pod="kube-system/coredns-66bc5c9577-qwwf7" Jan 17 00:16:21.597248 kubelet[2578]: I0117 00:16:21.595559 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4qgg\" (UniqueName: \"kubernetes.io/projected/4dd5910c-d46c-4829-af81-73c3a3c07bf1-kube-api-access-x4qgg\") pod \"calico-kube-controllers-5b8d4cfc64-6pg6z\" (UID: \"4dd5910c-d46c-4829-af81-73c3a3c07bf1\") " pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" Jan 17 00:16:21.597248 kubelet[2578]: I0117 00:16:21.595606 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c0487a21-dbe0-44a7-9d70-ec67d89290e6-whisker-backend-key-pair\") pod \"whisker-7f7864c4bd-h6vsc\" (UID: \"c0487a21-dbe0-44a7-9d70-ec67d89290e6\") " pod="calico-system/whisker-7f7864c4bd-h6vsc" Jan 17 00:16:21.626964 systemd[1]: Created slice kubepods-besteffort-pod4dd5910c_d46c_4829_af81_73c3a3c07bf1.slice - libcontainer container kubepods-besteffort-pod4dd5910c_d46c_4829_af81_73c3a3c07bf1.slice. Jan 17 00:16:21.648894 systemd[1]: Created slice kubepods-besteffort-pode768df9c_0c67_442b_b814_3828e727eb5c.slice - libcontainer container kubepods-besteffort-pode768df9c_0c67_442b_b814_3828e727eb5c.slice. Jan 17 00:16:21.666566 systemd[1]: Created slice kubepods-besteffort-poded571de0_820f_44a5_8d65_cc57b2d7af22.slice - libcontainer container kubepods-besteffort-poded571de0_820f_44a5_8d65_cc57b2d7af22.slice. 
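The reconciler_common lines above show the kubelet's volume manager verifying, per pod, that each declared volume is attached and ready before the sandbox is created: ConfigMap bundles (config-volume, tigera-ca-bundle, whisker-ca-bundle), Secret-backed key pairs (whisker-backend-key-pair), and projected kube-api-access-* service-account tokens. For reference, a ConfigMap volume such as coredns's config-volume corresponds to a pod-spec entry along the lines of the sketch below, written with the upstream core/v1 types; the ConfigMap name "coredns" is an assumption based on the usual kube-system deployment, since the log only shows the volume name and pod UID.

    // configmap_volume_sketch.go
    //
    // Sketch of the pod-spec volume behind the "config-volume" entries in the
    // log, using the upstream k8s.io/api core/v1 types. The ConfigMap name is
    // assumed, not taken from the log.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "config-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "coredns"},
                },
            },
        }
        fmt.Printf("volume %q -> ConfigMap %q\n", vol.Name, vol.ConfigMap.Name)
    }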
Jan 17 00:16:21.690571 systemd[1]: Created slice kubepods-besteffort-pod53c5293b_6a33_4d3c_b982_707b2d5a0fd8.slice - libcontainer container kubepods-besteffort-pod53c5293b_6a33_4d3c_b982_707b2d5a0fd8.slice. Jan 17 00:16:21.697070 kubelet[2578]: I0117 00:16:21.696242 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed571de0-820f-44a5-8d65-cc57b2d7af22-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-zts27\" (UID: \"ed571de0-820f-44a5-8d65-cc57b2d7af22\") " pod="calico-system/goldmane-7c778bb748-zts27" Jan 17 00:16:21.697070 kubelet[2578]: I0117 00:16:21.696290 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ed571de0-820f-44a5-8d65-cc57b2d7af22-goldmane-key-pair\") pod \"goldmane-7c778bb748-zts27\" (UID: \"ed571de0-820f-44a5-8d65-cc57b2d7af22\") " pod="calico-system/goldmane-7c778bb748-zts27" Jan 17 00:16:21.697070 kubelet[2578]: I0117 00:16:21.696321 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed571de0-820f-44a5-8d65-cc57b2d7af22-config\") pod \"goldmane-7c778bb748-zts27\" (UID: \"ed571de0-820f-44a5-8d65-cc57b2d7af22\") " pod="calico-system/goldmane-7c778bb748-zts27" Jan 17 00:16:21.697070 kubelet[2578]: I0117 00:16:21.696395 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e768df9c-0c67-442b-b814-3828e727eb5c-calico-apiserver-certs\") pod \"calico-apiserver-85bf985ffc-rd5bl\" (UID: \"e768df9c-0c67-442b-b814-3828e727eb5c\") " pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" Jan 17 00:16:21.697070 kubelet[2578]: I0117 00:16:21.696452 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcx27\" (UniqueName: \"kubernetes.io/projected/e768df9c-0c67-442b-b814-3828e727eb5c-kube-api-access-wcx27\") pod \"calico-apiserver-85bf985ffc-rd5bl\" (UID: \"e768df9c-0c67-442b-b814-3828e727eb5c\") " pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" Jan 17 00:16:21.697418 kubelet[2578]: I0117 00:16:21.696497 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7c4r\" (UniqueName: \"kubernetes.io/projected/ed571de0-820f-44a5-8d65-cc57b2d7af22-kube-api-access-f7c4r\") pod \"goldmane-7c778bb748-zts27\" (UID: \"ed571de0-820f-44a5-8d65-cc57b2d7af22\") " pod="calico-system/goldmane-7c778bb748-zts27" Jan 17 00:16:21.697418 kubelet[2578]: I0117 00:16:21.696604 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/53c5293b-6a33-4d3c-b982-707b2d5a0fd8-calico-apiserver-certs\") pod \"calico-apiserver-85bf985ffc-kdf8q\" (UID: \"53c5293b-6a33-4d3c-b982-707b2d5a0fd8\") " pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" Jan 17 00:16:21.697418 kubelet[2578]: I0117 00:16:21.696632 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nlbm\" (UniqueName: \"kubernetes.io/projected/53c5293b-6a33-4d3c-b982-707b2d5a0fd8-kube-api-access-8nlbm\") pod \"calico-apiserver-85bf985ffc-kdf8q\" (UID: \"53c5293b-6a33-4d3c-b982-707b2d5a0fd8\") " 
pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" Jan 17 00:16:21.764153 containerd[1459]: time="2026-01-17T00:16:21.763966371Z" level=error msg="Failed to destroy network for sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.764757 containerd[1459]: time="2026-01-17T00:16:21.764704222Z" level=error msg="encountered an error cleaning up failed sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.764878 containerd[1459]: time="2026-01-17T00:16:21.764799819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49lv6,Uid:0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.766111 kubelet[2578]: E0117 00:16:21.765033 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.766111 kubelet[2578]: E0117 00:16:21.765181 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-49lv6" Jan 17 00:16:21.766111 kubelet[2578]: E0117 00:16:21.765213 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-49lv6" Jan 17 00:16:21.766345 kubelet[2578]: E0117 00:16:21.765292 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-49lv6_calico-system(0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-49lv6_calico-system(0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:16:21.813085 containerd[1459]: time="2026-01-17T00:16:21.811175139Z" level=error msg="Failed to destroy network for sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.813085 containerd[1459]: time="2026-01-17T00:16:21.811576623Z" level=error msg="encountered an error cleaning up failed sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.813085 containerd[1459]: time="2026-01-17T00:16:21.811657437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nfdnh,Uid:2ecb4038-1a10-453e-a0f6-362231e5785b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.813350 kubelet[2578]: E0117 00:16:21.811927 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.813350 kubelet[2578]: E0117 00:16:21.811978 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nfdnh" Jan 17 00:16:21.813350 kubelet[2578]: E0117 00:16:21.812009 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nfdnh" Jan 17 00:16:21.813523 kubelet[2578]: E0117 00:16:21.812672 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nfdnh_kube-system(2ecb4038-1a10-453e-a0f6-362231e5785b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nfdnh_kube-system(2ecb4038-1a10-453e-a0f6-362231e5785b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nfdnh" podUID="2ecb4038-1a10-453e-a0f6-362231e5785b" Jan 17 00:16:21.833437 containerd[1459]: time="2026-01-17T00:16:21.833393759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f7864c4bd-h6vsc,Uid:c0487a21-dbe0-44a7-9d70-ec67d89290e6,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:21.903962 containerd[1459]: time="2026-01-17T00:16:21.903472428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qwwf7,Uid:d801c454-90d8-47bb-9464-b452b91cd3db,Namespace:kube-system,Attempt:0,}" Jan 17 00:16:21.913637 containerd[1459]: time="2026-01-17T00:16:21.913586642Z" level=error msg="Failed to destroy network for sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.914082 containerd[1459]: time="2026-01-17T00:16:21.913997404Z" level=error msg="encountered an error cleaning up failed sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.914252 containerd[1459]: time="2026-01-17T00:16:21.914104035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f7864c4bd-h6vsc,Uid:c0487a21-dbe0-44a7-9d70-ec67d89290e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.914604 kubelet[2578]: E0117 00:16:21.914326 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:21.914604 kubelet[2578]: E0117 00:16:21.914385 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f7864c4bd-h6vsc" Jan 17 00:16:21.914604 kubelet[2578]: E0117 00:16:21.914417 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f7864c4bd-h6vsc" Jan 17 00:16:21.914827 kubelet[2578]: E0117 00:16:21.914494 2578 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f7864c4bd-h6vsc_calico-system(c0487a21-dbe0-44a7-9d70-ec67d89290e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f7864c4bd-h6vsc_calico-system(c0487a21-dbe0-44a7-9d70-ec67d89290e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f7864c4bd-h6vsc" podUID="c0487a21-dbe0-44a7-9d70-ec67d89290e6" Jan 17 00:16:21.951168 containerd[1459]: time="2026-01-17T00:16:21.948850863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b8d4cfc64-6pg6z,Uid:4dd5910c-d46c-4829-af81-73c3a3c07bf1,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:21.968483 containerd[1459]: time="2026-01-17T00:16:21.965919463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bf985ffc-rd5bl,Uid:e768df9c-0c67-442b-b814-3828e727eb5c,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:16:21.989066 containerd[1459]: time="2026-01-17T00:16:21.989006079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zts27,Uid:ed571de0-820f-44a5-8d65-cc57b2d7af22,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:22.002380 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe-shm.mount: Deactivated successfully. Jan 17 00:16:22.003144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716-shm.mount: Deactivated successfully. 
Jan 17 00:16:22.005072 containerd[1459]: time="2026-01-17T00:16:22.003409703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bf985ffc-kdf8q,Uid:53c5293b-6a33-4d3c-b982-707b2d5a0fd8,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:16:22.143861 containerd[1459]: time="2026-01-17T00:16:22.143744714Z" level=error msg="Failed to destroy network for sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.145404 containerd[1459]: time="2026-01-17T00:16:22.145295283Z" level=error msg="encountered an error cleaning up failed sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.145404 containerd[1459]: time="2026-01-17T00:16:22.145378580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qwwf7,Uid:d801c454-90d8-47bb-9464-b452b91cd3db,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.145803 kubelet[2578]: E0117 00:16:22.145662 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.145803 kubelet[2578]: E0117 00:16:22.145756 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qwwf7" Jan 17 00:16:22.145803 kubelet[2578]: E0117 00:16:22.145787 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qwwf7" Jan 17 00:16:22.146690 kubelet[2578]: E0117 00:16:22.145862 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qwwf7_kube-system(d801c454-90d8-47bb-9464-b452b91cd3db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qwwf7_kube-system(d801c454-90d8-47bb-9464-b452b91cd3db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qwwf7" podUID="d801c454-90d8-47bb-9464-b452b91cd3db" Jan 17 00:16:22.246542 containerd[1459]: time="2026-01-17T00:16:22.246404344Z" level=error msg="Failed to destroy network for sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.247026 containerd[1459]: time="2026-01-17T00:16:22.246930827Z" level=error msg="encountered an error cleaning up failed sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.247026 containerd[1459]: time="2026-01-17T00:16:22.247011312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b8d4cfc64-6pg6z,Uid:4dd5910c-d46c-4829-af81-73c3a3c07bf1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.247485 kubelet[2578]: E0117 00:16:22.247342 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.247485 kubelet[2578]: E0117 00:16:22.247413 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" Jan 17 00:16:22.247485 kubelet[2578]: E0117 00:16:22.247452 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" Jan 17 00:16:22.247800 kubelet[2578]: E0117 00:16:22.247525 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b8d4cfc64-6pg6z_calico-system(4dd5910c-d46c-4829-af81-73c3a3c07bf1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5b8d4cfc64-6pg6z_calico-system(4dd5910c-d46c-4829-af81-73c3a3c07bf1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" podUID="4dd5910c-d46c-4829-af81-73c3a3c07bf1" Jan 17 00:16:22.271245 containerd[1459]: time="2026-01-17T00:16:22.270830467Z" level=error msg="Failed to destroy network for sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.273369 containerd[1459]: time="2026-01-17T00:16:22.273323547Z" level=error msg="encountered an error cleaning up failed sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.274124 containerd[1459]: time="2026-01-17T00:16:22.274082155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bf985ffc-kdf8q,Uid:53c5293b-6a33-4d3c-b982-707b2d5a0fd8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.275232 kubelet[2578]: E0117 00:16:22.274686 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.275232 kubelet[2578]: E0117 00:16:22.274764 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" Jan 17 00:16:22.275232 kubelet[2578]: E0117 00:16:22.274794 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" Jan 17 00:16:22.275478 kubelet[2578]: E0117 00:16:22.274879 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-85bf985ffc-kdf8q_calico-apiserver(53c5293b-6a33-4d3c-b982-707b2d5a0fd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85bf985ffc-kdf8q_calico-apiserver(53c5293b-6a33-4d3c-b982-707b2d5a0fd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8" Jan 17 00:16:22.286276 containerd[1459]: time="2026-01-17T00:16:22.286235702Z" level=error msg="Failed to destroy network for sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.286699 containerd[1459]: time="2026-01-17T00:16:22.286640448Z" level=error msg="encountered an error cleaning up failed sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.286795 containerd[1459]: time="2026-01-17T00:16:22.286710676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zts27,Uid:ed571de0-820f-44a5-8d65-cc57b2d7af22,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.286900 containerd[1459]: time="2026-01-17T00:16:22.286849696Z" level=error msg="Failed to destroy network for sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.287458 containerd[1459]: time="2026-01-17T00:16:22.287413595Z" level=error msg="encountered an error cleaning up failed sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.287573 containerd[1459]: time="2026-01-17T00:16:22.287480349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bf985ffc-rd5bl,Uid:e768df9c-0c67-442b-b814-3828e727eb5c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.287686 kubelet[2578]: E0117 00:16:22.287201 2578 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.287686 kubelet[2578]: E0117 00:16:22.287608 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zts27" Jan 17 00:16:22.287686 kubelet[2578]: E0117 00:16:22.287636 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zts27" Jan 17 00:16:22.287853 kubelet[2578]: E0117 00:16:22.287724 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-zts27_calico-system(ed571de0-820f-44a5-8d65-cc57b2d7af22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-zts27_calico-system(ed571de0-820f-44a5-8d65-cc57b2d7af22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-zts27" podUID="ed571de0-820f-44a5-8d65-cc57b2d7af22" Jan 17 00:16:22.288267 kubelet[2578]: E0117 00:16:22.288204 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.288267 kubelet[2578]: E0117 00:16:22.288257 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" Jan 17 00:16:22.288856 kubelet[2578]: E0117 00:16:22.288286 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" Jan 17 00:16:22.288856 kubelet[2578]: E0117 00:16:22.288366 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85bf985ffc-rd5bl_calico-apiserver(e768df9c-0c67-442b-b814-3828e727eb5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85bf985ffc-rd5bl_calico-apiserver(e768df9c-0c67-442b-b814-3828e727eb5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:16:22.538645 kubelet[2578]: I0117 00:16:22.538583 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:22.539597 containerd[1459]: time="2026-01-17T00:16:22.539536765Z" level=info msg="StopPodSandbox for \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\"" Jan 17 00:16:22.540219 containerd[1459]: time="2026-01-17T00:16:22.539819513Z" level=info msg="Ensure that sandbox 80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6 in task-service has been cleanup successfully" Jan 17 00:16:22.541599 containerd[1459]: time="2026-01-17T00:16:22.541554424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:16:22.548619 kubelet[2578]: I0117 00:16:22.548552 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:22.550531 containerd[1459]: time="2026-01-17T00:16:22.549168354Z" level=info msg="StopPodSandbox for \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\"" Jan 17 00:16:22.550531 containerd[1459]: time="2026-01-17T00:16:22.549401121Z" level=info msg="Ensure that sandbox d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba in task-service has been cleanup successfully" Jan 17 00:16:22.558075 kubelet[2578]: I0117 00:16:22.557226 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:22.562185 containerd[1459]: time="2026-01-17T00:16:22.562141863Z" level=info msg="StopPodSandbox for \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\"" Jan 17 00:16:22.563958 containerd[1459]: time="2026-01-17T00:16:22.563924380Z" level=info msg="Ensure that sandbox ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9 in task-service has been cleanup successfully" Jan 17 00:16:22.568099 kubelet[2578]: I0117 00:16:22.565418 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:22.569239 containerd[1459]: time="2026-01-17T00:16:22.569200251Z" level=info msg="StopPodSandbox for \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\"" Jan 17 00:16:22.569472 containerd[1459]: time="2026-01-17T00:16:22.569438314Z" level=info msg="Ensure that sandbox 300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a in task-service has been cleanup successfully" Jan 17 
00:16:22.581373 kubelet[2578]: I0117 00:16:22.581283 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:22.585436 containerd[1459]: time="2026-01-17T00:16:22.584989460Z" level=info msg="StopPodSandbox for \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\"" Jan 17 00:16:22.593774 containerd[1459]: time="2026-01-17T00:16:22.593440545Z" level=info msg="Ensure that sandbox 0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716 in task-service has been cleanup successfully" Jan 17 00:16:22.596937 kubelet[2578]: I0117 00:16:22.596904 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:22.598170 containerd[1459]: time="2026-01-17T00:16:22.598131663Z" level=info msg="StopPodSandbox for \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\"" Jan 17 00:16:22.599167 containerd[1459]: time="2026-01-17T00:16:22.598618130Z" level=info msg="Ensure that sandbox a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549 in task-service has been cleanup successfully" Jan 17 00:16:22.607372 kubelet[2578]: I0117 00:16:22.607340 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:22.611151 containerd[1459]: time="2026-01-17T00:16:22.610815112Z" level=info msg="StopPodSandbox for \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\"" Jan 17 00:16:22.612527 containerd[1459]: time="2026-01-17T00:16:22.612473098Z" level=info msg="Ensure that sandbox 1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306 in task-service has been cleanup successfully" Jan 17 00:16:22.637883 kubelet[2578]: I0117 00:16:22.637770 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:22.640238 containerd[1459]: time="2026-01-17T00:16:22.640134744Z" level=info msg="StopPodSandbox for \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\"" Jan 17 00:16:22.640637 containerd[1459]: time="2026-01-17T00:16:22.640437984Z" level=info msg="Ensure that sandbox f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe in task-service has been cleanup successfully" Jan 17 00:16:22.707741 containerd[1459]: time="2026-01-17T00:16:22.707491824Z" level=error msg="StopPodSandbox for \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\" failed" error="failed to destroy network for sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.714009 kubelet[2578]: E0117 00:16:22.713962 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:22.714183 kubelet[2578]: E0117 
00:16:22.714029 2578 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6"} Jan 17 00:16:22.714183 kubelet[2578]: E0117 00:16:22.714115 2578 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed571de0-820f-44a5-8d65-cc57b2d7af22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:22.714183 kubelet[2578]: E0117 00:16:22.714155 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed571de0-820f-44a5-8d65-cc57b2d7af22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-zts27" podUID="ed571de0-820f-44a5-8d65-cc57b2d7af22" Jan 17 00:16:22.755290 containerd[1459]: time="2026-01-17T00:16:22.755232159Z" level=error msg="StopPodSandbox for \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\" failed" error="failed to destroy network for sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.755667 containerd[1459]: time="2026-01-17T00:16:22.755624226Z" level=error msg="StopPodSandbox for \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\" failed" error="failed to destroy network for sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.756038 kubelet[2578]: E0117 00:16:22.755985 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:22.756038 kubelet[2578]: E0117 00:16:22.756073 2578 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba"} Jan 17 00:16:22.756274 kubelet[2578]: E0117 00:16:22.756121 2578 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4dd5910c-d46c-4829-af81-73c3a3c07bf1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:22.756274 kubelet[2578]: E0117 00:16:22.756167 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4dd5910c-d46c-4829-af81-73c3a3c07bf1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" podUID="4dd5910c-d46c-4829-af81-73c3a3c07bf1" Jan 17 00:16:22.756274 kubelet[2578]: E0117 00:16:22.755985 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:22.756274 kubelet[2578]: E0117 00:16:22.756209 2578 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9"} Jan 17 00:16:22.756625 kubelet[2578]: E0117 00:16:22.756235 2578 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e768df9c-0c67-442b-b814-3828e727eb5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:22.756625 kubelet[2578]: E0117 00:16:22.756265 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e768df9c-0c67-442b-b814-3828e727eb5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:16:22.790995 containerd[1459]: time="2026-01-17T00:16:22.789697625Z" level=error msg="StopPodSandbox for \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\" failed" error="failed to destroy network for sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.791163 kubelet[2578]: E0117 00:16:22.789993 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:22.791163 kubelet[2578]: E0117 00:16:22.790085 2578 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a"} Jan 17 00:16:22.791163 kubelet[2578]: E0117 00:16:22.790134 2578 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c0487a21-dbe0-44a7-9d70-ec67d89290e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:22.791163 kubelet[2578]: E0117 00:16:22.790178 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c0487a21-dbe0-44a7-9d70-ec67d89290e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f7864c4bd-h6vsc" podUID="c0487a21-dbe0-44a7-9d70-ec67d89290e6" Jan 17 00:16:22.791667 containerd[1459]: time="2026-01-17T00:16:22.791622810Z" level=error msg="StopPodSandbox for \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\" failed" error="failed to destroy network for sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.792290 kubelet[2578]: E0117 00:16:22.792025 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:22.792290 kubelet[2578]: E0117 00:16:22.792124 2578 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549"} Jan 17 00:16:22.792290 kubelet[2578]: E0117 00:16:22.792194 2578 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d801c454-90d8-47bb-9464-b452b91cd3db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:22.792290 kubelet[2578]: E0117 00:16:22.792252 2578 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d801c454-90d8-47bb-9464-b452b91cd3db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qwwf7" podUID="d801c454-90d8-47bb-9464-b452b91cd3db" Jan 17 00:16:22.797911 containerd[1459]: time="2026-01-17T00:16:22.797852535Z" level=error msg="StopPodSandbox for \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\" failed" error="failed to destroy network for sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.798579 containerd[1459]: time="2026-01-17T00:16:22.798102386Z" level=error msg="StopPodSandbox for \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\" failed" error="failed to destroy network for sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.798667 kubelet[2578]: E0117 00:16:22.798104 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:22.798667 kubelet[2578]: E0117 00:16:22.798162 2578 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716"} Jan 17 00:16:22.798667 kubelet[2578]: E0117 00:16:22.798281 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:22.798667 kubelet[2578]: E0117 00:16:22.798314 2578 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306"} Jan 17 00:16:22.798667 kubelet[2578]: E0117 00:16:22.798346 2578 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53c5293b-6a33-4d3c-b982-707b2d5a0fd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" Jan 17 00:16:22.798991 kubelet[2578]: E0117 00:16:22.798386 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53c5293b-6a33-4d3c-b982-707b2d5a0fd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8" Jan 17 00:16:22.798991 kubelet[2578]: E0117 00:16:22.798201 2578 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ecb4038-1a10-453e-a0f6-362231e5785b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:22.798991 kubelet[2578]: E0117 00:16:22.798462 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ecb4038-1a10-453e-a0f6-362231e5785b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nfdnh" podUID="2ecb4038-1a10-453e-a0f6-362231e5785b" Jan 17 00:16:22.801786 containerd[1459]: time="2026-01-17T00:16:22.801707206Z" level=error msg="StopPodSandbox for \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\" failed" error="failed to destroy network for sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:22.801933 kubelet[2578]: E0117 00:16:22.801904 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:22.802110 kubelet[2578]: E0117 00:16:22.801945 2578 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe"} Jan 17 00:16:22.802110 kubelet[2578]: E0117 00:16:22.801981 2578 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:22.802110 kubelet[2578]: E0117 00:16:22.802020 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:16:22.939899 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306-shm.mount: Deactivated successfully. Jan 17 00:16:22.940079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6-shm.mount: Deactivated successfully. Jan 17 00:16:22.940191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9-shm.mount: Deactivated successfully. Jan 17 00:16:22.940299 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba-shm.mount: Deactivated successfully. Jan 17 00:16:22.940411 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549-shm.mount: Deactivated successfully. Jan 17 00:16:29.593015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195391005.mount: Deactivated successfully. 
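[Editor's note] Every CreatePodSandbox/KillPodSandbox failure above reports the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file the calico/node container writes once it is running with that hostPath mounted. At this point in the log the node pod (calico-node-csq5g, seen later) has not finished pulling its image. A minimal troubleshooting sketch, assuming the conventional calico-system namespace and the k8s-app=calico-node label used by upstream Calico manifests (not something this log shows being run):

    # Is the calico-node DaemonSet pod for this node scheduled and Ready?
    kubectl -n calico-system get pods -l k8s-app=calico-node -o wide

    # On the node itself: the CNI plugin expects calico/node to have populated this directory
    ls -l /var/lib/calico/

    # Confirm the DaemonSet actually mounts /var/lib/calico into the calico-node container
    kubectl -n calico-system get ds calico-node -o yaml | grep -n "/var/lib/calico"

If the pod is merely still pulling its image (as here), the errors clear on their own once the container starts; if the hostPath mount is missing, the manifest needs fixing.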
Jan 17 00:16:29.630956 containerd[1459]: time="2026-01-17T00:16:29.629907568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:29.631952 containerd[1459]: time="2026-01-17T00:16:29.631893357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:16:29.633444 containerd[1459]: time="2026-01-17T00:16:29.633407449Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:29.636835 containerd[1459]: time="2026-01-17T00:16:29.636797441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:29.638118 containerd[1459]: time="2026-01-17T00:16:29.637700328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.09424292s" Jan 17 00:16:29.638293 containerd[1459]: time="2026-01-17T00:16:29.638265830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:16:29.665385 containerd[1459]: time="2026-01-17T00:16:29.665347700Z" level=info msg="CreateContainer within sandbox \"e1159c95d3c31d3eeaac2c90e4f735a8096aac3e12d1a7c05231d56f429d7f23\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:16:29.687540 containerd[1459]: time="2026-01-17T00:16:29.687480850Z" level=info msg="CreateContainer within sandbox \"e1159c95d3c31d3eeaac2c90e4f735a8096aac3e12d1a7c05231d56f429d7f23\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc3e1e4c7fe9c6abb5e7673bf3c3c2902d2b7092bb5b689d1c2a37a582d8d0ca\"" Jan 17 00:16:29.691070 containerd[1459]: time="2026-01-17T00:16:29.689144946Z" level=info msg="StartContainer for \"bc3e1e4c7fe9c6abb5e7673bf3c3c2902d2b7092bb5b689d1c2a37a582d8d0ca\"" Jan 17 00:16:29.728240 systemd[1]: Started cri-containerd-bc3e1e4c7fe9c6abb5e7673bf3c3c2902d2b7092bb5b689d1c2a37a582d8d0ca.scope - libcontainer container bc3e1e4c7fe9c6abb5e7673bf3c3c2902d2b7092bb5b689d1c2a37a582d8d0ca. Jan 17 00:16:29.770442 containerd[1459]: time="2026-01-17T00:16:29.770395379Z" level=info msg="StartContainer for \"bc3e1e4c7fe9c6abb5e7673bf3c3c2902d2b7092bb5b689d1c2a37a582d8d0ca\" returns successfully" Jan 17 00:16:29.902519 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:16:29.902667 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
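[Editor's note] The node image pull completes here (~7.1 s) and the calico-node container starts; the WireGuard module load is expected, since calico/node probes for WireGuard support at startup. A quick node-local confirmation that the container is up and has written the nodename file, sketched with crictl (assuming crictl is pointed at this node's containerd socket; these commands are illustrative, not taken from the log):

    # Container should now be listed as Running
    crictl ps --name calico-node

    # The file the earlier CNI errors were missing should now exist
    cat /var/lib/calico/nodename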
Jan 17 00:16:30.036080 containerd[1459]: time="2026-01-17T00:16:30.035531609Z" level=info msg="StopPodSandbox for \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\"" Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.155 [INFO][3808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.155 [INFO][3808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" iface="eth0" netns="/var/run/netns/cni-55a17b83-2fb4-8447-1dd6-8996e142811c" Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.156 [INFO][3808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" iface="eth0" netns="/var/run/netns/cni-55a17b83-2fb4-8447-1dd6-8996e142811c" Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.157 [INFO][3808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" iface="eth0" netns="/var/run/netns/cni-55a17b83-2fb4-8447-1dd6-8996e142811c" Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.157 [INFO][3808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.157 [INFO][3808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.206 [INFO][3819] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" HandleID="k8s-pod-network.300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--7f7864c4bd--h6vsc-eth0" Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.206 [INFO][3819] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.206 [INFO][3819] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.216 [WARNING][3819] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" HandleID="k8s-pod-network.300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--7f7864c4bd--h6vsc-eth0" Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.216 [INFO][3819] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" HandleID="k8s-pod-network.300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--7f7864c4bd--h6vsc-eth0" Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.218 [INFO][3819] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:30.225991 containerd[1459]: 2026-01-17 00:16:30.223 [INFO][3808] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:30.227316 containerd[1459]: time="2026-01-17T00:16:30.227020738Z" level=info msg="TearDown network for sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\" successfully" Jan 17 00:16:30.227316 containerd[1459]: time="2026-01-17T00:16:30.227105062Z" level=info msg="StopPodSandbox for \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\" returns successfully" Jan 17 00:16:30.369112 kubelet[2578]: I0117 00:16:30.367796 2578 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdc6t\" (UniqueName: \"kubernetes.io/projected/c0487a21-dbe0-44a7-9d70-ec67d89290e6-kube-api-access-fdc6t\") pod \"c0487a21-dbe0-44a7-9d70-ec67d89290e6\" (UID: \"c0487a21-dbe0-44a7-9d70-ec67d89290e6\") " Jan 17 00:16:30.369112 kubelet[2578]: I0117 00:16:30.367884 2578 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c0487a21-dbe0-44a7-9d70-ec67d89290e6-whisker-backend-key-pair\") pod \"c0487a21-dbe0-44a7-9d70-ec67d89290e6\" (UID: \"c0487a21-dbe0-44a7-9d70-ec67d89290e6\") " Jan 17 00:16:30.369112 kubelet[2578]: I0117 00:16:30.367916 2578 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0487a21-dbe0-44a7-9d70-ec67d89290e6-whisker-ca-bundle\") pod \"c0487a21-dbe0-44a7-9d70-ec67d89290e6\" (UID: \"c0487a21-dbe0-44a7-9d70-ec67d89290e6\") " Jan 17 00:16:30.369112 kubelet[2578]: I0117 00:16:30.368449 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0487a21-dbe0-44a7-9d70-ec67d89290e6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c0487a21-dbe0-44a7-9d70-ec67d89290e6" (UID: "c0487a21-dbe0-44a7-9d70-ec67d89290e6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:16:30.376705 kubelet[2578]: I0117 00:16:30.376640 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0487a21-dbe0-44a7-9d70-ec67d89290e6-kube-api-access-fdc6t" (OuterVolumeSpecName: "kube-api-access-fdc6t") pod "c0487a21-dbe0-44a7-9d70-ec67d89290e6" (UID: "c0487a21-dbe0-44a7-9d70-ec67d89290e6"). InnerVolumeSpecName "kube-api-access-fdc6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:16:30.377208 kubelet[2578]: I0117 00:16:30.377147 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0487a21-dbe0-44a7-9d70-ec67d89290e6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c0487a21-dbe0-44a7-9d70-ec67d89290e6" (UID: "c0487a21-dbe0-44a7-9d70-ec67d89290e6"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:16:30.469132 kubelet[2578]: I0117 00:16:30.469073 2578 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c0487a21-dbe0-44a7-9d70-ec67d89290e6-whisker-backend-key-pair\") on node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" DevicePath \"\"" Jan 17 00:16:30.469132 kubelet[2578]: I0117 00:16:30.469132 2578 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0487a21-dbe0-44a7-9d70-ec67d89290e6-whisker-ca-bundle\") on node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" DevicePath \"\"" Jan 17 00:16:30.469340 kubelet[2578]: I0117 00:16:30.469151 2578 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fdc6t\" (UniqueName: \"kubernetes.io/projected/c0487a21-dbe0-44a7-9d70-ec67d89290e6-kube-api-access-fdc6t\") on node \"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3\" DevicePath \"\"" Jan 17 00:16:30.591903 systemd[1]: run-netns-cni\x2d55a17b83\x2d2fb4\x2d8447\x2d1dd6\x2d8996e142811c.mount: Deactivated successfully. Jan 17 00:16:30.592073 systemd[1]: var-lib-kubelet-pods-c0487a21\x2ddbe0\x2d44a7\x2d9d70\x2dec67d89290e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfdc6t.mount: Deactivated successfully. Jan 17 00:16:30.592215 systemd[1]: var-lib-kubelet-pods-c0487a21\x2ddbe0\x2d44a7\x2d9d70\x2dec67d89290e6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:16:30.687781 systemd[1]: Removed slice kubepods-besteffort-podc0487a21_dbe0_44a7_9d70_ec67d89290e6.slice - libcontainer container kubepods-besteffort-podc0487a21_dbe0_44a7_9d70_ec67d89290e6.slice. Jan 17 00:16:30.715654 kubelet[2578]: I0117 00:16:30.714543 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-csq5g" podStartSLOduration=1.7871996129999999 podStartE2EDuration="19.71451919s" podCreationTimestamp="2026-01-17 00:16:11 +0000 UTC" firstStartedPulling="2026-01-17 00:16:11.712423227 +0000 UTC m=+23.544851914" lastFinishedPulling="2026-01-17 00:16:29.639742787 +0000 UTC m=+41.472171491" observedRunningTime="2026-01-17 00:16:30.713608536 +0000 UTC m=+42.546037272" watchObservedRunningTime="2026-01-17 00:16:30.71451919 +0000 UTC m=+42.546947898" Jan 17 00:16:30.802479 systemd[1]: Created slice kubepods-besteffort-pod864265cf_310b_4383_972d_cec82b8024d4.slice - libcontainer container kubepods-besteffort-pod864265cf_310b_4383_972d_cec82b8024d4.slice. 
Jan 17 00:16:30.871206 kubelet[2578]: I0117 00:16:30.871103 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/864265cf-310b-4383-972d-cec82b8024d4-whisker-backend-key-pair\") pod \"whisker-555ccdcf74-z7wj5\" (UID: \"864265cf-310b-4383-972d-cec82b8024d4\") " pod="calico-system/whisker-555ccdcf74-z7wj5" Jan 17 00:16:30.871567 kubelet[2578]: I0117 00:16:30.871317 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2tch\" (UniqueName: \"kubernetes.io/projected/864265cf-310b-4383-972d-cec82b8024d4-kube-api-access-p2tch\") pod \"whisker-555ccdcf74-z7wj5\" (UID: \"864265cf-310b-4383-972d-cec82b8024d4\") " pod="calico-system/whisker-555ccdcf74-z7wj5" Jan 17 00:16:30.871567 kubelet[2578]: I0117 00:16:30.871363 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/864265cf-310b-4383-972d-cec82b8024d4-whisker-ca-bundle\") pod \"whisker-555ccdcf74-z7wj5\" (UID: \"864265cf-310b-4383-972d-cec82b8024d4\") " pod="calico-system/whisker-555ccdcf74-z7wj5" Jan 17 00:16:31.113522 containerd[1459]: time="2026-01-17T00:16:31.113004155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555ccdcf74-z7wj5,Uid:864265cf-310b-4383-972d-cec82b8024d4,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:31.267361 systemd-networkd[1371]: cali9308a0c8e20: Link UP Jan 17 00:16:31.268824 systemd-networkd[1371]: cali9308a0c8e20: Gained carrier Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.160 [INFO][3865] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.177 [INFO][3865] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0 whisker-555ccdcf74- calico-system 864265cf-310b-4383-972d-cec82b8024d4 896 0 2026-01-17 00:16:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:555ccdcf74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3 whisker-555ccdcf74-z7wj5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9308a0c8e20 [] [] }} ContainerID="fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" Namespace="calico-system" Pod="whisker-555ccdcf74-z7wj5" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.177 [INFO][3865] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" Namespace="calico-system" Pod="whisker-555ccdcf74-z7wj5" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.209 [INFO][3876] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" HandleID="k8s-pod-network.fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" 
Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.210 [INFO][3876] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" HandleID="k8s-pod-network.fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", "pod":"whisker-555ccdcf74-z7wj5", "timestamp":"2026-01-17 00:16:31.209881842 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.210 [INFO][3876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.210 [INFO][3876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.210 [INFO][3876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3' Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.219 [INFO][3876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.224 [INFO][3876] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.229 [INFO][3876] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.232 [INFO][3876] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.236 [INFO][3876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.236 [INFO][3876] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.238 [INFO][3876] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.245 [INFO][3876] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.252 [INFO][3876] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.65/26] block=192.168.97.64/26 handle="k8s-pod-network.fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.252 [INFO][3876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.65/26] handle="k8s-pod-network.fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.252 [INFO][3876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:31.294712 containerd[1459]: 2026-01-17 00:16:31.252 [INFO][3876] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.65/26] IPv6=[] ContainerID="fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" HandleID="k8s-pod-network.fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0" Jan 17 00:16:31.296397 containerd[1459]: 2026-01-17 00:16:31.255 [INFO][3865] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" Namespace="calico-system" Pod="whisker-555ccdcf74-z7wj5" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0", GenerateName:"whisker-555ccdcf74-", Namespace:"calico-system", SelfLink:"", UID:"864265cf-310b-4383-972d-cec82b8024d4", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"555ccdcf74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"", Pod:"whisker-555ccdcf74-z7wj5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9308a0c8e20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:31.296397 containerd[1459]: 2026-01-17 00:16:31.255 [INFO][3865] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.65/32] ContainerID="fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" Namespace="calico-system" Pod="whisker-555ccdcf74-z7wj5" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0" Jan 17 00:16:31.296397 containerd[1459]: 2026-01-17 00:16:31.255 [INFO][3865] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9308a0c8e20 
ContainerID="fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" Namespace="calico-system" Pod="whisker-555ccdcf74-z7wj5" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0" Jan 17 00:16:31.296397 containerd[1459]: 2026-01-17 00:16:31.268 [INFO][3865] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" Namespace="calico-system" Pod="whisker-555ccdcf74-z7wj5" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0" Jan 17 00:16:31.296397 containerd[1459]: 2026-01-17 00:16:31.270 [INFO][3865] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" Namespace="calico-system" Pod="whisker-555ccdcf74-z7wj5" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0", GenerateName:"whisker-555ccdcf74-", Namespace:"calico-system", SelfLink:"", UID:"864265cf-310b-4383-972d-cec82b8024d4", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"555ccdcf74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d", Pod:"whisker-555ccdcf74-z7wj5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9308a0c8e20", MAC:"ce:d1:52:d1:84:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:31.296397 containerd[1459]: 2026-01-17 00:16:31.289 [INFO][3865] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d" Namespace="calico-system" Pod="whisker-555ccdcf74-z7wj5" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--555ccdcf74--z7wj5-eth0" Jan 17 00:16:31.324338 containerd[1459]: time="2026-01-17T00:16:31.323026019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:31.324338 containerd[1459]: time="2026-01-17T00:16:31.323911946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:31.324338 containerd[1459]: time="2026-01-17T00:16:31.323995457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:31.324338 containerd[1459]: time="2026-01-17T00:16:31.324194084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:31.351261 systemd[1]: Started cri-containerd-fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d.scope - libcontainer container fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d. Jan 17 00:16:31.412944 containerd[1459]: time="2026-01-17T00:16:31.412552022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555ccdcf74-z7wj5,Uid:864265cf-310b-4383-972d-cec82b8024d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc7f57584429c7cadb246bed60804dab5c300d02cac0e0ffbe0e2711fe838e8d\"" Jan 17 00:16:31.416784 containerd[1459]: time="2026-01-17T00:16:31.416073906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:16:31.577278 containerd[1459]: time="2026-01-17T00:16:31.577225041Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:31.579358 containerd[1459]: time="2026-01-17T00:16:31.578947246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:16:31.579358 containerd[1459]: time="2026-01-17T00:16:31.578970167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:16:31.580579 kubelet[2578]: E0117 00:16:31.580318 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:16:31.580579 kubelet[2578]: E0117 00:16:31.580435 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:16:31.582442 kubelet[2578]: E0117 00:16:31.581251 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-555ccdcf74-z7wj5_calico-system(864265cf-310b-4383-972d-cec82b8024d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:31.586942 containerd[1459]: time="2026-01-17T00:16:31.584583277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:16:31.729007 systemd[1]: run-containerd-runc-k8s.io-bc3e1e4c7fe9c6abb5e7673bf3c3c2902d2b7092bb5b689d1c2a37a582d8d0ca-runc.xrfpGC.mount: Deactivated successfully. 
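[Editor's note] The whisker container fails with NotFound because ghcr.io/flatcar/calico/whisker:v3.30.4 does not resolve in the registry; the whisker-backend image fails the same way just below. To distinguish "tag absent upstream" from a node-local pull problem, one might retry the pull directly on the node (a hedged sketch, assuming crictl is configured for this node's runtime):

    # NotFound here confirms the tag itself is missing from the registry
    crictl pull ghcr.io/flatcar/calico/whisker:v3.30.4

    # Compare against the calico images that did resolve (e.g. node:v3.30.4 above)
    crictl images | grep flatcar/calico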
Jan 17 00:16:31.756691 containerd[1459]: time="2026-01-17T00:16:31.756231826Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:31.758930 containerd[1459]: time="2026-01-17T00:16:31.758776305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:16:31.760154 containerd[1459]: time="2026-01-17T00:16:31.758851470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:16:31.761447 kubelet[2578]: E0117 00:16:31.760420 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:16:31.761447 kubelet[2578]: E0117 00:16:31.760487 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:16:31.761447 kubelet[2578]: E0117 00:16:31.760587 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-555ccdcf74-z7wj5_calico-system(864265cf-310b-4383-972d-cec82b8024d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:31.761700 kubelet[2578]: E0117 00:16:31.760648 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555ccdcf74-z7wj5" podUID="864265cf-310b-4383-972d-cec82b8024d4" Jan 17 00:16:32.103087 kernel: bpftool[4073]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:16:32.353406 kubelet[2578]: I0117 00:16:32.353253 2578 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0487a21-dbe0-44a7-9d70-ec67d89290e6" path="/var/lib/kubelet/pods/c0487a21-dbe0-44a7-9d70-ec67d89290e6/volumes" Jan 17 00:16:32.409225 systemd-networkd[1371]: vxlan.calico: Link UP Jan 17 00:16:32.409243 systemd-networkd[1371]: vxlan.calico: Gained carrier Jan 17 
00:16:32.685240 kubelet[2578]: E0117 00:16:32.684937 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555ccdcf74-z7wj5" podUID="864265cf-310b-4383-972d-cec82b8024d4" Jan 17 00:16:32.739553 systemd-networkd[1371]: cali9308a0c8e20: Gained IPv6LL Jan 17 00:16:33.348416 containerd[1459]: time="2026-01-17T00:16:33.347937393Z" level=info msg="StopPodSandbox for \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\"" Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.425 [INFO][4158] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.425 [INFO][4158] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" iface="eth0" netns="/var/run/netns/cni-e7d079b4-1948-5572-2303-09dcdf2ddbec" Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.425 [INFO][4158] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" iface="eth0" netns="/var/run/netns/cni-e7d079b4-1948-5572-2303-09dcdf2ddbec" Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.428 [INFO][4158] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" iface="eth0" netns="/var/run/netns/cni-e7d079b4-1948-5572-2303-09dcdf2ddbec" Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.428 [INFO][4158] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.428 [INFO][4158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.472 [INFO][4165] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" HandleID="k8s-pod-network.a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.473 [INFO][4165] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.473 [INFO][4165] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.482 [WARNING][4165] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" HandleID="k8s-pod-network.a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.482 [INFO][4165] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" HandleID="k8s-pod-network.a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.486 [INFO][4165] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:33.489929 containerd[1459]: 2026-01-17 00:16:33.488 [INFO][4158] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:33.493665 containerd[1459]: time="2026-01-17T00:16:33.492177945Z" level=info msg="TearDown network for sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\" successfully" Jan 17 00:16:33.493665 containerd[1459]: time="2026-01-17T00:16:33.492224626Z" level=info msg="StopPodSandbox for \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\" returns successfully" Jan 17 00:16:33.496833 systemd[1]: run-netns-cni\x2de7d079b4\x2d1948\x2d5572\x2d2303\x2d09dcdf2ddbec.mount: Deactivated successfully. Jan 17 00:16:33.499529 containerd[1459]: time="2026-01-17T00:16:33.499485494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qwwf7,Uid:d801c454-90d8-47bb-9464-b452b91cd3db,Namespace:kube-system,Attempt:1,}" Jan 17 00:16:33.651881 systemd-networkd[1371]: calif201c6917a0: Link UP Jan 17 00:16:33.653632 systemd-networkd[1371]: calif201c6917a0: Gained carrier Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.562 [INFO][4175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0 coredns-66bc5c9577- kube-system d801c454-90d8-47bb-9464-b452b91cd3db 922 0 2026-01-17 00:15:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3 coredns-66bc5c9577-qwwf7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif201c6917a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" Namespace="kube-system" Pod="coredns-66bc5c9577-qwwf7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.562 [INFO][4175] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" Namespace="kube-system" Pod="coredns-66bc5c9577-qwwf7" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.596 [INFO][4187] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" HandleID="k8s-pod-network.f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.596 [INFO][4187] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" HandleID="k8s-pod-network.f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf200), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", "pod":"coredns-66bc5c9577-qwwf7", "timestamp":"2026-01-17 00:16:33.596384551 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.596 [INFO][4187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.597 [INFO][4187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.597 [INFO][4187] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3' Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.607 [INFO][4187] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.613 [INFO][4187] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.619 [INFO][4187] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.622 [INFO][4187] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.625 [INFO][4187] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.625 [INFO][4187] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.628 [INFO][4187] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5 Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.635 [INFO][4187] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.643 [INFO][4187] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.66/26] block=192.168.97.64/26 handle="k8s-pod-network.f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.643 [INFO][4187] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.66/26] handle="k8s-pod-network.f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.643 [INFO][4187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:33.678669 containerd[1459]: 2026-01-17 00:16:33.643 [INFO][4187] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.66/26] IPv6=[] ContainerID="f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" HandleID="k8s-pod-network.f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:33.680309 containerd[1459]: 2026-01-17 00:16:33.646 [INFO][4175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" Namespace="kube-system" Pod="coredns-66bc5c9577-qwwf7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d801c454-90d8-47bb-9464-b452b91cd3db", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"", Pod:"coredns-66bc5c9577-qwwf7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif201c6917a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:33.680309 containerd[1459]: 2026-01-17 00:16:33.647 [INFO][4175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.66/32] ContainerID="f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" Namespace="kube-system" Pod="coredns-66bc5c9577-qwwf7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:33.680309 containerd[1459]: 2026-01-17 00:16:33.647 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif201c6917a0 ContainerID="f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" Namespace="kube-system" Pod="coredns-66bc5c9577-qwwf7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:33.680309 containerd[1459]: 2026-01-17 00:16:33.653 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" Namespace="kube-system" Pod="coredns-66bc5c9577-qwwf7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:33.680635 containerd[1459]: 2026-01-17 00:16:33.654 [INFO][4175] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" Namespace="kube-system" Pod="coredns-66bc5c9577-qwwf7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d801c454-90d8-47bb-9464-b452b91cd3db", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5", Pod:"coredns-66bc5c9577-qwwf7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calif201c6917a0", MAC:"e6:8d:4d:41:86:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:33.680635 containerd[1459]: 2026-01-17 00:16:33.674 [INFO][4175] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5" Namespace="kube-system" Pod="coredns-66bc5c9577-qwwf7" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:33.712433 containerd[1459]: time="2026-01-17T00:16:33.711604051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:33.712433 containerd[1459]: time="2026-01-17T00:16:33.711681514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:33.712433 containerd[1459]: time="2026-01-17T00:16:33.711719257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:33.712433 containerd[1459]: time="2026-01-17T00:16:33.711998556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:33.761275 systemd[1]: Started cri-containerd-f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5.scope - libcontainer container f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5. 
Jan 17 00:16:33.817892 containerd[1459]: time="2026-01-17T00:16:33.817834956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qwwf7,Uid:d801c454-90d8-47bb-9464-b452b91cd3db,Namespace:kube-system,Attempt:1,} returns sandbox id \"f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5\"" Jan 17 00:16:33.827237 containerd[1459]: time="2026-01-17T00:16:33.826975342Z" level=info msg="CreateContainer within sandbox \"f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:16:33.844404 containerd[1459]: time="2026-01-17T00:16:33.844343709Z" level=info msg="CreateContainer within sandbox \"f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84e370176f4d92b22e01809a2a6b5de5317d638af765bacb309394a8d33e23f7\"" Jan 17 00:16:33.846192 containerd[1459]: time="2026-01-17T00:16:33.845127977Z" level=info msg="StartContainer for \"84e370176f4d92b22e01809a2a6b5de5317d638af765bacb309394a8d33e23f7\"" Jan 17 00:16:33.883293 systemd[1]: Started cri-containerd-84e370176f4d92b22e01809a2a6b5de5317d638af765bacb309394a8d33e23f7.scope - libcontainer container 84e370176f4d92b22e01809a2a6b5de5317d638af765bacb309394a8d33e23f7. Jan 17 00:16:33.924613 containerd[1459]: time="2026-01-17T00:16:33.924427304Z" level=info msg="StartContainer for \"84e370176f4d92b22e01809a2a6b5de5317d638af765bacb309394a8d33e23f7\" returns successfully" Jan 17 00:16:34.402357 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Jan 17 00:16:34.726333 kubelet[2578]: I0117 00:16:34.726163 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qwwf7" podStartSLOduration=40.726133939 podStartE2EDuration="40.726133939s" podCreationTimestamp="2026-01-17 00:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:16:34.704889667 +0000 UTC m=+46.537318385" watchObservedRunningTime="2026-01-17 00:16:34.726133939 +0000 UTC m=+46.558562645" Jan 17 00:16:35.299244 systemd-networkd[1371]: calif201c6917a0: Gained IPv6LL Jan 17 00:16:35.350082 containerd[1459]: time="2026-01-17T00:16:35.349497495Z" level=info msg="StopPodSandbox for \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\"" Jan 17 00:16:35.353548 containerd[1459]: time="2026-01-17T00:16:35.350887226Z" level=info msg="StopPodSandbox for \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\"" Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.464 [INFO][4302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.464 [INFO][4302] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" iface="eth0" netns="/var/run/netns/cni-68f12e2b-bb34-1eb1-9b30-48fbe1e28d43" Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.465 [INFO][4302] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" iface="eth0" netns="/var/run/netns/cni-68f12e2b-bb34-1eb1-9b30-48fbe1e28d43" Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.465 [INFO][4302] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" iface="eth0" netns="/var/run/netns/cni-68f12e2b-bb34-1eb1-9b30-48fbe1e28d43" Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.465 [INFO][4302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.465 [INFO][4302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.531 [INFO][4316] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" HandleID="k8s-pod-network.d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.533 [INFO][4316] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.533 [INFO][4316] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.556 [WARNING][4316] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" HandleID="k8s-pod-network.d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.556 [INFO][4316] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" HandleID="k8s-pod-network.d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.559 [INFO][4316] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:35.567089 containerd[1459]: 2026-01-17 00:16:35.562 [INFO][4302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:35.568250 containerd[1459]: time="2026-01-17T00:16:35.567893763Z" level=info msg="TearDown network for sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\" successfully" Jan 17 00:16:35.568250 containerd[1459]: time="2026-01-17T00:16:35.567938410Z" level=info msg="StopPodSandbox for \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\" returns successfully" Jan 17 00:16:35.571392 systemd[1]: run-netns-cni\x2d68f12e2b\x2dbb34\x2d1eb1\x2d9b30\x2d48fbe1e28d43.mount: Deactivated successfully. 
Jan 17 00:16:35.577094 containerd[1459]: time="2026-01-17T00:16:35.576663323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b8d4cfc64-6pg6z,Uid:4dd5910c-d46c-4829-af81-73c3a3c07bf1,Namespace:calico-system,Attempt:1,}" Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.479 [INFO][4301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.482 [INFO][4301] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" iface="eth0" netns="/var/run/netns/cni-7f133bf7-2f6d-1e2a-70bb-7d286d4316b7" Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.483 [INFO][4301] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" iface="eth0" netns="/var/run/netns/cni-7f133bf7-2f6d-1e2a-70bb-7d286d4316b7" Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.484 [INFO][4301] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" iface="eth0" netns="/var/run/netns/cni-7f133bf7-2f6d-1e2a-70bb-7d286d4316b7" Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.484 [INFO][4301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.484 [INFO][4301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.553 [INFO][4321] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" HandleID="k8s-pod-network.f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.555 [INFO][4321] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.560 [INFO][4321] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.579 [WARNING][4321] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" HandleID="k8s-pod-network.f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.580 [INFO][4321] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" HandleID="k8s-pod-network.f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.584 [INFO][4321] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:35.590930 containerd[1459]: 2026-01-17 00:16:35.587 [INFO][4301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:35.594517 containerd[1459]: time="2026-01-17T00:16:35.591275129Z" level=info msg="TearDown network for sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\" successfully" Jan 17 00:16:35.594517 containerd[1459]: time="2026-01-17T00:16:35.591306874Z" level=info msg="StopPodSandbox for \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\" returns successfully" Jan 17 00:16:35.595693 containerd[1459]: time="2026-01-17T00:16:35.595644352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49lv6,Uid:0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624,Namespace:calico-system,Attempt:1,}" Jan 17 00:16:35.599726 systemd[1]: run-netns-cni\x2d7f133bf7\x2d2f6d\x2d1e2a\x2d70bb\x2d7d286d4316b7.mount: Deactivated successfully. Jan 17 00:16:35.929933 systemd-networkd[1371]: calib9d1cde29b1: Link UP Jan 17 00:16:35.935902 systemd-networkd[1371]: calib9d1cde29b1: Gained carrier Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.758 [INFO][4339] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0 csi-node-driver- calico-system 0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624 943 0 2026-01-17 00:16:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3 csi-node-driver-49lv6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib9d1cde29b1 [] [] }} ContainerID="4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" Namespace="calico-system" Pod="csi-node-driver-49lv6" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.759 [INFO][4339] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" Namespace="calico-system" Pod="csi-node-driver-49lv6" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.845 [INFO][4358] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" HandleID="k8s-pod-network.4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.849 [INFO][4358] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" HandleID="k8s-pod-network.4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000388db0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", "pod":"csi-node-driver-49lv6", "timestamp":"2026-01-17 00:16:35.845422057 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.849 [INFO][4358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.850 [INFO][4358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.850 [INFO][4358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3' Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.881 [INFO][4358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.889 [INFO][4358] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.896 [INFO][4358] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.900 [INFO][4358] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.902 [INFO][4358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.902 [INFO][4358] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.904 [INFO][4358] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.909 [INFO][4358] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.917 [INFO][4358] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.67/26] block=192.168.97.64/26 handle="k8s-pod-network.4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.917 [INFO][4358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.67/26] handle="k8s-pod-network.4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 
00:16:35.918 [INFO][4358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:35.960535 containerd[1459]: 2026-01-17 00:16:35.918 [INFO][4358] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.67/26] IPv6=[] ContainerID="4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" HandleID="k8s-pod-network.4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:35.962736 containerd[1459]: 2026-01-17 00:16:35.921 [INFO][4339] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" Namespace="calico-system" Pod="csi-node-driver-49lv6" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"", Pod:"csi-node-driver-49lv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9d1cde29b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:35.962736 containerd[1459]: 2026-01-17 00:16:35.921 [INFO][4339] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.67/32] ContainerID="4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" Namespace="calico-system" Pod="csi-node-driver-49lv6" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:35.962736 containerd[1459]: 2026-01-17 00:16:35.921 [INFO][4339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9d1cde29b1 ContainerID="4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" Namespace="calico-system" Pod="csi-node-driver-49lv6" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:35.962736 containerd[1459]: 2026-01-17 00:16:35.937 [INFO][4339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" Namespace="calico-system" Pod="csi-node-driver-49lv6" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:35.962736 containerd[1459]: 2026-01-17 00:16:35.939 [INFO][4339] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" Namespace="calico-system" Pod="csi-node-driver-49lv6" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b", Pod:"csi-node-driver-49lv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9d1cde29b1", MAC:"b6:63:bb:93:c6:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:35.962736 containerd[1459]: 2026-01-17 00:16:35.956 [INFO][4339] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b" Namespace="calico-system" Pod="csi-node-driver-49lv6" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:36.010736 containerd[1459]: time="2026-01-17T00:16:36.010592599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:36.011204 containerd[1459]: time="2026-01-17T00:16:36.010951263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:36.011609 containerd[1459]: time="2026-01-17T00:16:36.011406086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:36.012213 containerd[1459]: time="2026-01-17T00:16:36.011997181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:36.058255 systemd[1]: Started cri-containerd-4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b.scope - libcontainer container 4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b. Jan 17 00:16:36.076907 systemd-networkd[1371]: cali2839b08f255: Link UP Jan 17 00:16:36.078204 systemd-networkd[1371]: cali2839b08f255: Gained carrier Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:35.733 [INFO][4329] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0 calico-kube-controllers-5b8d4cfc64- calico-system 4dd5910c-d46c-4829-af81-73c3a3c07bf1 942 0 2026-01-17 00:16:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b8d4cfc64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3 calico-kube-controllers-5b8d4cfc64-6pg6z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2839b08f255 [] [] }} ContainerID="ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" Namespace="calico-system" Pod="calico-kube-controllers-5b8d4cfc64-6pg6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:35.734 [INFO][4329] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" Namespace="calico-system" Pod="calico-kube-controllers-5b8d4cfc64-6pg6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:35.870 [INFO][4352] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" HandleID="k8s-pod-network.ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:35.875 [INFO][4352] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" HandleID="k8s-pod-network.ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122c10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", "pod":"calico-kube-controllers-5b8d4cfc64-6pg6z", "timestamp":"2026-01-17 00:16:35.868033834 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:35.875 
[INFO][4352] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:35.918 [INFO][4352] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:35.918 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3' Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:35.983 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:35.998 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:36.017 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:36.037 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:36.042 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:36.043 [INFO][4352] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:36.045 [INFO][4352] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:36.053 [INFO][4352] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:36.066 [INFO][4352] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.68/26] block=192.168.97.64/26 handle="k8s-pod-network.ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:36.066 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.68/26] handle="k8s-pod-network.ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:36.066 [INFO][4352] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:36.111499 containerd[1459]: 2026-01-17 00:16:36.067 [INFO][4352] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.68/26] IPv6=[] ContainerID="ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" HandleID="k8s-pod-network.ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:36.113673 containerd[1459]: 2026-01-17 00:16:36.071 [INFO][4329] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" Namespace="calico-system" Pod="calico-kube-controllers-5b8d4cfc64-6pg6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0", GenerateName:"calico-kube-controllers-5b8d4cfc64-", Namespace:"calico-system", SelfLink:"", UID:"4dd5910c-d46c-4829-af81-73c3a3c07bf1", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b8d4cfc64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"", Pod:"calico-kube-controllers-5b8d4cfc64-6pg6z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2839b08f255", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:36.113673 containerd[1459]: 2026-01-17 00:16:36.071 [INFO][4329] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.68/32] ContainerID="ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" Namespace="calico-system" Pod="calico-kube-controllers-5b8d4cfc64-6pg6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:36.113673 containerd[1459]: 2026-01-17 00:16:36.071 [INFO][4329] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2839b08f255 ContainerID="ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" Namespace="calico-system" Pod="calico-kube-controllers-5b8d4cfc64-6pg6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:36.113673 containerd[1459]: 2026-01-17 00:16:36.075 [INFO][4329] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" Namespace="calico-system" Pod="calico-kube-controllers-5b8d4cfc64-6pg6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:36.113673 containerd[1459]: 2026-01-17 00:16:36.076 [INFO][4329] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" Namespace="calico-system" Pod="calico-kube-controllers-5b8d4cfc64-6pg6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0", GenerateName:"calico-kube-controllers-5b8d4cfc64-", Namespace:"calico-system", SelfLink:"", UID:"4dd5910c-d46c-4829-af81-73c3a3c07bf1", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b8d4cfc64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d", Pod:"calico-kube-controllers-5b8d4cfc64-6pg6z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2839b08f255", MAC:"46:90:57:10:60:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:36.113673 containerd[1459]: 2026-01-17 00:16:36.107 [INFO][4329] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d" Namespace="calico-system" Pod="calico-kube-controllers-5b8d4cfc64-6pg6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:36.167022 containerd[1459]: time="2026-01-17T00:16:36.166918002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:36.168276 containerd[1459]: time="2026-01-17T00:16:36.168175812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:36.168670 containerd[1459]: time="2026-01-17T00:16:36.168470146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:36.168822 containerd[1459]: time="2026-01-17T00:16:36.168646967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:36.199812 containerd[1459]: time="2026-01-17T00:16:36.199611465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49lv6,Uid:0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624,Namespace:calico-system,Attempt:1,} returns sandbox id \"4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b\"" Jan 17 00:16:36.212739 containerd[1459]: time="2026-01-17T00:16:36.212228965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:16:36.212923 systemd[1]: Started cri-containerd-ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d.scope - libcontainer container ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d. Jan 17 00:16:36.298016 containerd[1459]: time="2026-01-17T00:16:36.297840288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b8d4cfc64-6pg6z,Uid:4dd5910c-d46c-4829-af81-73c3a3c07bf1,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d\"" Jan 17 00:16:36.349178 containerd[1459]: time="2026-01-17T00:16:36.349128221Z" level=info msg="StopPodSandbox for \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\"" Jan 17 00:16:36.350806 containerd[1459]: time="2026-01-17T00:16:36.350706447Z" level=info msg="StopPodSandbox for \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\"" Jan 17 00:16:36.377711 containerd[1459]: time="2026-01-17T00:16:36.377143571Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:36.379255 containerd[1459]: time="2026-01-17T00:16:36.379200334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:16:36.379946 containerd[1459]: time="2026-01-17T00:16:36.379895281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:16:36.380314 kubelet[2578]: E0117 00:16:36.380269 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:16:36.380975 kubelet[2578]: E0117 00:16:36.380323 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:16:36.381154 containerd[1459]: time="2026-01-17T00:16:36.380727577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:16:36.381260 kubelet[2578]: E0117 00:16:36.381120 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-49lv6_calico-system(0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:36.551791 containerd[1459]: time="2026-01-17T00:16:36.551717496Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:36.553251 containerd[1459]: time="2026-01-17T00:16:36.553191970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:16:36.553456 containerd[1459]: time="2026-01-17T00:16:36.553326573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:16:36.553681 kubelet[2578]: E0117 00:16:36.553619 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:16:36.553787 kubelet[2578]: E0117 00:16:36.553694 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:16:36.554217 kubelet[2578]: E0117 00:16:36.553969 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5b8d4cfc64-6pg6z_calico-system(4dd5910c-d46c-4829-af81-73c3a3c07bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:36.554217 kubelet[2578]: E0117 00:16:36.554028 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" podUID="4dd5910c-d46c-4829-af81-73c3a3c07bf1" Jan 17 00:16:36.555634 containerd[1459]: time="2026-01-17T00:16:36.554690630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.457 [INFO][4483] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.462 [INFO][4483] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" iface="eth0" netns="/var/run/netns/cni-85be3cb1-eb22-c880-4a6a-623406b2d337" Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.463 [INFO][4483] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" iface="eth0" netns="/var/run/netns/cni-85be3cb1-eb22-c880-4a6a-623406b2d337" Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.466 [INFO][4483] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" iface="eth0" netns="/var/run/netns/cni-85be3cb1-eb22-c880-4a6a-623406b2d337" Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.467 [INFO][4483] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.469 [INFO][4483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.543 [INFO][4502] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" HandleID="k8s-pod-network.0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.544 [INFO][4502] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.544 [INFO][4502] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.555 [WARNING][4502] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" HandleID="k8s-pod-network.0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.555 [INFO][4502] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" HandleID="k8s-pod-network.0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.558 [INFO][4502] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:36.567523 containerd[1459]: 2026-01-17 00:16:36.563 [INFO][4483] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:36.568221 containerd[1459]: time="2026-01-17T00:16:36.567760496Z" level=info msg="TearDown network for sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\" successfully" Jan 17 00:16:36.568221 containerd[1459]: time="2026-01-17T00:16:36.567985466Z" level=info msg="StopPodSandbox for \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\" returns successfully" Jan 17 00:16:36.576139 containerd[1459]: time="2026-01-17T00:16:36.574679358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nfdnh,Uid:2ecb4038-1a10-453e-a0f6-362231e5785b,Namespace:kube-system,Attempt:1,}" Jan 17 00:16:36.589053 systemd[1]: run-netns-cni\x2d85be3cb1\x2deb22\x2dc880\x2d4a6a\x2d623406b2d337.mount: Deactivated successfully. Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.456 [INFO][4487] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.457 [INFO][4487] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" iface="eth0" netns="/var/run/netns/cni-6965ec82-d356-af96-fa39-9b7065fa1ee1" Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.457 [INFO][4487] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" iface="eth0" netns="/var/run/netns/cni-6965ec82-d356-af96-fa39-9b7065fa1ee1" Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.458 [INFO][4487] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" iface="eth0" netns="/var/run/netns/cni-6965ec82-d356-af96-fa39-9b7065fa1ee1" Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.458 [INFO][4487] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.458 [INFO][4487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.548 [INFO][4499] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" HandleID="k8s-pod-network.ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.549 [INFO][4499] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.558 [INFO][4499] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.593 [WARNING][4499] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" HandleID="k8s-pod-network.ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.593 [INFO][4499] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" HandleID="k8s-pod-network.ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.595 [INFO][4499] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:36.600627 containerd[1459]: 2026-01-17 00:16:36.597 [INFO][4487] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:36.603147 containerd[1459]: time="2026-01-17T00:16:36.600804958Z" level=info msg="TearDown network for sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\" successfully" Jan 17 00:16:36.603147 containerd[1459]: time="2026-01-17T00:16:36.600836852Z" level=info msg="StopPodSandbox for \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\" returns successfully" Jan 17 00:16:36.608860 containerd[1459]: time="2026-01-17T00:16:36.605751242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bf985ffc-rd5bl,Uid:e768df9c-0c67-442b-b814-3828e727eb5c,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:16:36.608218 systemd[1]: run-netns-cni\x2d6965ec82\x2dd356\x2daf96\x2dfa39\x2d9b7065fa1ee1.mount: Deactivated successfully. 
Jan 17 00:16:36.717390 kubelet[2578]: E0117 00:16:36.717202 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" podUID="4dd5910c-d46c-4829-af81-73c3a3c07bf1" Jan 17 00:16:36.729225 containerd[1459]: time="2026-01-17T00:16:36.729171684Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:36.733027 containerd[1459]: time="2026-01-17T00:16:36.732850220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:16:36.733489 containerd[1459]: time="2026-01-17T00:16:36.732898052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:16:36.733811 kubelet[2578]: E0117 00:16:36.733763 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:16:36.733811 kubelet[2578]: E0117 00:16:36.733820 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:16:36.734799 kubelet[2578]: E0117 00:16:36.734741 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-49lv6_calico-system(0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:36.737130 kubelet[2578]: E0117 00:16:36.734820 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:16:36.864084 systemd-networkd[1371]: cali3dce2411e25: Link UP Jan 17 00:16:36.869907 systemd-networkd[1371]: cali3dce2411e25: Gained carrier Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.695 [INFO][4513] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0 coredns-66bc5c9577- kube-system 2ecb4038-1a10-453e-a0f6-362231e5785b 959 0 2026-01-17 00:15:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3 coredns-66bc5c9577-nfdnh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3dce2411e25 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" Namespace="kube-system" Pod="coredns-66bc5c9577-nfdnh" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.695 [INFO][4513] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" Namespace="kube-system" Pod="coredns-66bc5c9577-nfdnh" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.788 [INFO][4539] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" HandleID="k8s-pod-network.7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.788 [INFO][4539] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" HandleID="k8s-pod-network.7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039c690), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", "pod":"coredns-66bc5c9577-nfdnh", "timestamp":"2026-01-17 00:16:36.78824593 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.788 [INFO][4539] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.788 [INFO][4539] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.788 [INFO][4539] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3' Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.801 [INFO][4539] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.811 [INFO][4539] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.818 [INFO][4539] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.821 [INFO][4539] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.826 [INFO][4539] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.826 [INFO][4539] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.828 [INFO][4539] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190 Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.838 [INFO][4539] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.846 [INFO][4539] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.69/26] block=192.168.97.64/26 handle="k8s-pod-network.7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.846 [INFO][4539] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.69/26] handle="k8s-pod-network.7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.846 [INFO][4539] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:36.905113 containerd[1459]: 2026-01-17 00:16:36.846 [INFO][4539] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.69/26] IPv6=[] ContainerID="7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" HandleID="k8s-pod-network.7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:36.906379 containerd[1459]: 2026-01-17 00:16:36.851 [INFO][4513] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" Namespace="kube-system" Pod="coredns-66bc5c9577-nfdnh" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2ecb4038-1a10-453e-a0f6-362231e5785b", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"", Pod:"coredns-66bc5c9577-nfdnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dce2411e25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:36.906379 containerd[1459]: 2026-01-17 00:16:36.852 [INFO][4513] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.69/32] ContainerID="7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" Namespace="kube-system" Pod="coredns-66bc5c9577-nfdnh" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:36.906379 containerd[1459]: 2026-01-17 00:16:36.852 [INFO][4513] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dce2411e25 
ContainerID="7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" Namespace="kube-system" Pod="coredns-66bc5c9577-nfdnh" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:36.906379 containerd[1459]: 2026-01-17 00:16:36.872 [INFO][4513] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" Namespace="kube-system" Pod="coredns-66bc5c9577-nfdnh" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:36.906690 containerd[1459]: 2026-01-17 00:16:36.875 [INFO][4513] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" Namespace="kube-system" Pod="coredns-66bc5c9577-nfdnh" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2ecb4038-1a10-453e-a0f6-362231e5785b", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190", Pod:"coredns-66bc5c9577-nfdnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dce2411e25", MAC:"ee:5b:be:29:18:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:36.906690 containerd[1459]: 2026-01-17 00:16:36.898 [INFO][4513] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190" Namespace="kube-system" 
Pod="coredns-66bc5c9577-nfdnh" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:36.963781 containerd[1459]: time="2026-01-17T00:16:36.963027180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:36.963781 containerd[1459]: time="2026-01-17T00:16:36.963145375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:36.963781 containerd[1459]: time="2026-01-17T00:16:36.963174933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:36.963781 containerd[1459]: time="2026-01-17T00:16:36.963304238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:37.003887 systemd-networkd[1371]: calic716dd89219: Link UP Jan 17 00:16:37.005469 systemd-networkd[1371]: calic716dd89219: Gained carrier Jan 17 00:16:37.020293 systemd[1]: Started cri-containerd-7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190.scope - libcontainer container 7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190. Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.714 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0 calico-apiserver-85bf985ffc- calico-apiserver e768df9c-0c67-442b-b814-3828e727eb5c 960 0 2026-01-17 00:16:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85bf985ffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3 calico-apiserver-85bf985ffc-rd5bl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic716dd89219 [] [] }} ContainerID="e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-rd5bl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.715 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-rd5bl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.808 [INFO][4544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" HandleID="k8s-pod-network.e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.810 [INFO][4544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" 
HandleID="k8s-pod-network.e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032f7f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", "pod":"calico-apiserver-85bf985ffc-rd5bl", "timestamp":"2026-01-17 00:16:36.808431073 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.810 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.847 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.847 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3' Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.903 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.915 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.928 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.932 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.939 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.939 [INFO][4544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.945 [INFO][4544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7 Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.960 [INFO][4544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.984 [INFO][4544] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.70/26] block=192.168.97.64/26 handle="k8s-pod-network.e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 
00:16:36.984 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.70/26] handle="k8s-pod-network.e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.984 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:37.049177 containerd[1459]: 2026-01-17 00:16:36.984 [INFO][4544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.70/26] IPv6=[] ContainerID="e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" HandleID="k8s-pod-network.e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:37.053408 containerd[1459]: 2026-01-17 00:16:36.994 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-rd5bl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0", GenerateName:"calico-apiserver-85bf985ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e768df9c-0c67-442b-b814-3828e727eb5c", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bf985ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"", Pod:"calico-apiserver-85bf985ffc-rd5bl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic716dd89219", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:37.053408 containerd[1459]: 2026-01-17 00:16:36.995 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.70/32] ContainerID="e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-rd5bl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:37.053408 containerd[1459]: 2026-01-17 00:16:36.996 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic716dd89219 ContainerID="e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-rd5bl" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:37.053408 containerd[1459]: 2026-01-17 00:16:37.008 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-rd5bl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:37.053408 containerd[1459]: 2026-01-17 00:16:37.010 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-rd5bl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0", GenerateName:"calico-apiserver-85bf985ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e768df9c-0c67-442b-b814-3828e727eb5c", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bf985ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7", Pod:"calico-apiserver-85bf985ffc-rd5bl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic716dd89219", MAC:"7a:87:eb:bb:0f:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:37.053408 containerd[1459]: 2026-01-17 00:16:37.041 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-rd5bl" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:37.087677 containerd[1459]: time="2026-01-17T00:16:37.087498430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:37.087819 containerd[1459]: time="2026-01-17T00:16:37.087730347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:37.087880 containerd[1459]: time="2026-01-17T00:16:37.087806966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:37.088999 containerd[1459]: time="2026-01-17T00:16:37.088139516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:37.129683 containerd[1459]: time="2026-01-17T00:16:37.129542622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nfdnh,Uid:2ecb4038-1a10-453e-a0f6-362231e5785b,Namespace:kube-system,Attempt:1,} returns sandbox id \"7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190\"" Jan 17 00:16:37.138379 systemd[1]: Started cri-containerd-e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7.scope - libcontainer container e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7. Jan 17 00:16:37.148415 containerd[1459]: time="2026-01-17T00:16:37.148212017Z" level=info msg="CreateContainer within sandbox \"7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:16:37.167185 containerd[1459]: time="2026-01-17T00:16:37.166984406Z" level=info msg="CreateContainer within sandbox \"7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4416aa971f87517b60101e9c48db2ef6f56ef84827ef522b99fc481bbe6df1a7\"" Jan 17 00:16:37.169804 containerd[1459]: time="2026-01-17T00:16:37.169566216Z" level=info msg="StartContainer for \"4416aa971f87517b60101e9c48db2ef6f56ef84827ef522b99fc481bbe6df1a7\"" Jan 17 00:16:37.213257 systemd[1]: Started cri-containerd-4416aa971f87517b60101e9c48db2ef6f56ef84827ef522b99fc481bbe6df1a7.scope - libcontainer container 4416aa971f87517b60101e9c48db2ef6f56ef84827ef522b99fc481bbe6df1a7. 
Jan 17 00:16:37.257441 containerd[1459]: time="2026-01-17T00:16:37.257203911Z" level=info msg="StartContainer for \"4416aa971f87517b60101e9c48db2ef6f56ef84827ef522b99fc481bbe6df1a7\" returns successfully" Jan 17 00:16:37.282110 containerd[1459]: time="2026-01-17T00:16:37.281935440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bf985ffc-rd5bl,Uid:e768df9c-0c67-442b-b814-3828e727eb5c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7\"" Jan 17 00:16:37.286944 containerd[1459]: time="2026-01-17T00:16:37.286890647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:16:37.352775 containerd[1459]: time="2026-01-17T00:16:37.352584142Z" level=info msg="StopPodSandbox for \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\"" Jan 17 00:16:37.447825 containerd[1459]: time="2026-01-17T00:16:37.446622849Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:37.448643 containerd[1459]: time="2026-01-17T00:16:37.448576821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:16:37.449154 kubelet[2578]: E0117 00:16:37.448955 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:37.449154 kubelet[2578]: E0117 00:16:37.449007 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:37.449154 kubelet[2578]: E0117 00:16:37.449120 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85bf985ffc-rd5bl_calico-apiserver(e768df9c-0c67-442b-b814-3828e727eb5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:37.450378 kubelet[2578]: E0117 00:16:37.449169 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:16:37.450693 containerd[1459]: time="2026-01-17T00:16:37.450170400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:37.469800 
containerd[1459]: 2026-01-17 00:16:37.414 [INFO][4695] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.415 [INFO][4695] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" iface="eth0" netns="/var/run/netns/cni-0d09ec38-b99c-190e-fa63-ca2bc29797a2" Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.416 [INFO][4695] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" iface="eth0" netns="/var/run/netns/cni-0d09ec38-b99c-190e-fa63-ca2bc29797a2" Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.416 [INFO][4695] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" iface="eth0" netns="/var/run/netns/cni-0d09ec38-b99c-190e-fa63-ca2bc29797a2" Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.416 [INFO][4695] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.416 [INFO][4695] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.455 [INFO][4702] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" HandleID="k8s-pod-network.1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.455 [INFO][4702] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.455 [INFO][4702] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.464 [WARNING][4702] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" HandleID="k8s-pod-network.1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.464 [INFO][4702] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" HandleID="k8s-pod-network.1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.466 [INFO][4702] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:37.469800 containerd[1459]: 2026-01-17 00:16:37.468 [INFO][4695] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:37.470861 containerd[1459]: time="2026-01-17T00:16:37.469971909Z" level=info msg="TearDown network for sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\" successfully" Jan 17 00:16:37.470861 containerd[1459]: time="2026-01-17T00:16:37.470004011Z" level=info msg="StopPodSandbox for \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\" returns successfully" Jan 17 00:16:37.472988 containerd[1459]: time="2026-01-17T00:16:37.472946691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bf985ffc-kdf8q,Uid:53c5293b-6a33-4d3c-b982-707b2d5a0fd8,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:16:37.587683 systemd[1]: run-netns-cni\x2d0d09ec38\x2db99c\x2d190e\x2dfa63\x2dca2bc29797a2.mount: Deactivated successfully. Jan 17 00:16:37.602426 systemd-networkd[1371]: calib9d1cde29b1: Gained IPv6LL Jan 17 00:16:37.635876 systemd-networkd[1371]: cali727f4791778: Link UP Jan 17 00:16:37.639160 systemd-networkd[1371]: cali727f4791778: Gained carrier Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.531 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0 calico-apiserver-85bf985ffc- calico-apiserver 53c5293b-6a33-4d3c-b982-707b2d5a0fd8 982 0 2026-01-17 00:16:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85bf985ffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3 calico-apiserver-85bf985ffc-kdf8q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali727f4791778 [] [] }} ContainerID="6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-kdf8q" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-" Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.532 [INFO][4708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-kdf8q" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.566 [INFO][4720] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" HandleID="k8s-pod-network.6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.566 [INFO][4720] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" HandleID="k8s-pod-network.6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", "pod":"calico-apiserver-85bf985ffc-kdf8q", "timestamp":"2026-01-17 00:16:37.566656691 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.567 [INFO][4720] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.567 [INFO][4720] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.567 [INFO][4720] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3' Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.579 [INFO][4720] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.591 [INFO][4720] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.597 [INFO][4720] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.599 [INFO][4720] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.605 [INFO][4720] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.605 [INFO][4720] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.608 [INFO][4720] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.616 [INFO][4720] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.625 [INFO][4720] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.71/26] block=192.168.97.64/26 handle="k8s-pod-network.6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.625 [INFO][4720] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.71/26] handle="k8s-pod-network.6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 
17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.625 [INFO][4720] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:37.660752 containerd[1459]: 2026-01-17 00:16:37.625 [INFO][4720] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.71/26] IPv6=[] ContainerID="6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" HandleID="k8s-pod-network.6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:37.663995 containerd[1459]: 2026-01-17 00:16:37.628 [INFO][4708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-kdf8q" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0", GenerateName:"calico-apiserver-85bf985ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"53c5293b-6a33-4d3c-b982-707b2d5a0fd8", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bf985ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"", Pod:"calico-apiserver-85bf985ffc-kdf8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali727f4791778", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:37.663995 containerd[1459]: 2026-01-17 00:16:37.628 [INFO][4708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.71/32] ContainerID="6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-kdf8q" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:37.663995 containerd[1459]: 2026-01-17 00:16:37.628 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali727f4791778 ContainerID="6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-kdf8q" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:37.663995 containerd[1459]: 2026-01-17 00:16:37.640 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-kdf8q" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:37.663995 containerd[1459]: 2026-01-17 00:16:37.641 [INFO][4708] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-kdf8q" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0", GenerateName:"calico-apiserver-85bf985ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"53c5293b-6a33-4d3c-b982-707b2d5a0fd8", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bf985ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca", Pod:"calico-apiserver-85bf985ffc-kdf8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali727f4791778", MAC:"16:ce:e9:43:73:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:37.663995 containerd[1459]: 2026-01-17 00:16:37.656 [INFO][4708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca" Namespace="calico-apiserver" Pod="calico-apiserver-85bf985ffc-kdf8q" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:37.693829 containerd[1459]: time="2026-01-17T00:16:37.693498679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:37.693829 containerd[1459]: time="2026-01-17T00:16:37.693697167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:37.694151 containerd[1459]: time="2026-01-17T00:16:37.693794454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:37.695423 containerd[1459]: time="2026-01-17T00:16:37.695026777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:37.739821 kubelet[2578]: E0117 00:16:37.739696 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" podUID="4dd5910c-d46c-4829-af81-73c3a3c07bf1" Jan 17 00:16:37.748867 kubelet[2578]: E0117 00:16:37.747212 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:16:37.761936 kubelet[2578]: E0117 00:16:37.761879 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:16:37.786095 kubelet[2578]: I0117 00:16:37.783155 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nfdnh" podStartSLOduration=43.783133184 podStartE2EDuration="43.783133184s" podCreationTimestamp="2026-01-17 00:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:16:37.757840448 +0000 UTC m=+49.590269153" watchObservedRunningTime="2026-01-17 00:16:37.783133184 +0000 UTC m=+49.615561888" Jan 17 00:16:37.788105 systemd[1]: run-containerd-runc-k8s.io-6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca-runc.zXfgR6.mount: Deactivated successfully. 
Jan 17 00:16:37.804503 systemd[1]: Started cri-containerd-6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca.scope - libcontainer container 6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca. Jan 17 00:16:37.987882 containerd[1459]: time="2026-01-17T00:16:37.987735094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bf985ffc-kdf8q,Uid:53c5293b-6a33-4d3c-b982-707b2d5a0fd8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca\"" Jan 17 00:16:37.991974 containerd[1459]: time="2026-01-17T00:16:37.991299333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:16:38.114726 systemd-networkd[1371]: cali2839b08f255: Gained IPv6LL Jan 17 00:16:38.165764 containerd[1459]: time="2026-01-17T00:16:38.165706830Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:38.167234 containerd[1459]: time="2026-01-17T00:16:38.167171722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:16:38.167384 containerd[1459]: time="2026-01-17T00:16:38.167272276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:38.167509 kubelet[2578]: E0117 00:16:38.167459 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:38.167604 kubelet[2578]: E0117 00:16:38.167523 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:38.167660 kubelet[2578]: E0117 00:16:38.167631 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85bf985ffc-kdf8q_calico-apiserver(53c5293b-6a33-4d3c-b982-707b2d5a0fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:38.167734 kubelet[2578]: E0117 00:16:38.167684 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8" Jan 17 00:16:38.351237 containerd[1459]: time="2026-01-17T00:16:38.350516872Z" level=info msg="StopPodSandbox for 
\"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\"" Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.408 [INFO][4794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.408 [INFO][4794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" iface="eth0" netns="/var/run/netns/cni-ffea6b50-336c-6ffe-3974-0e92cbb07e42" Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.410 [INFO][4794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" iface="eth0" netns="/var/run/netns/cni-ffea6b50-336c-6ffe-3974-0e92cbb07e42" Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.412 [INFO][4794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" iface="eth0" netns="/var/run/netns/cni-ffea6b50-336c-6ffe-3974-0e92cbb07e42" Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.412 [INFO][4794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.412 [INFO][4794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.463 [INFO][4801] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" HandleID="k8s-pod-network.80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.463 [INFO][4801] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.463 [INFO][4801] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.478 [WARNING][4801] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" HandleID="k8s-pod-network.80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.478 [INFO][4801] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" HandleID="k8s-pod-network.80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.480 [INFO][4801] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:38.484184 containerd[1459]: 2026-01-17 00:16:38.482 [INFO][4794] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:38.487312 containerd[1459]: time="2026-01-17T00:16:38.486135663Z" level=info msg="TearDown network for sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\" successfully" Jan 17 00:16:38.487312 containerd[1459]: time="2026-01-17T00:16:38.486182355Z" level=info msg="StopPodSandbox for \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\" returns successfully" Jan 17 00:16:38.491412 systemd[1]: run-netns-cni\x2dffea6b50\x2d336c\x2d6ffe\x2d3974\x2d0e92cbb07e42.mount: Deactivated successfully. Jan 17 00:16:38.492614 containerd[1459]: time="2026-01-17T00:16:38.492574717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zts27,Uid:ed571de0-820f-44a5-8d65-cc57b2d7af22,Namespace:calico-system,Attempt:1,}" Jan 17 00:16:38.562448 systemd-networkd[1371]: cali3dce2411e25: Gained IPv6LL Jan 17 00:16:38.672293 systemd-networkd[1371]: calib74defe4d34: Link UP Jan 17 00:16:38.672662 systemd-networkd[1371]: calib74defe4d34: Gained carrier Jan 17 00:16:38.693909 systemd-networkd[1371]: calic716dd89219: Gained IPv6LL Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.547 [INFO][4808] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0 goldmane-7c778bb748- calico-system ed571de0-820f-44a5-8d65-cc57b2d7af22 1016 0 2026-01-17 00:16:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3 goldmane-7c778bb748-zts27 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib74defe4d34 [] [] }} ContainerID="327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" Namespace="calico-system" Pod="goldmane-7c778bb748-zts27" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.547 [INFO][4808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" Namespace="calico-system" Pod="goldmane-7c778bb748-zts27" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.600 [INFO][4819] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" HandleID="k8s-pod-network.327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.600 [INFO][4819] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" HandleID="k8s-pod-network.327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", "pod":"goldmane-7c778bb748-zts27", "timestamp":"2026-01-17 00:16:38.60010963 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.600 [INFO][4819] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.600 [INFO][4819] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.600 [INFO][4819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3' Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.617 [INFO][4819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.629 [INFO][4819] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.635 [INFO][4819] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.638 [INFO][4819] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.643 [INFO][4819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.643 [INFO][4819] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.648 [INFO][4819] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04 Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.653 [INFO][4819] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.663 [INFO][4819] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.72/26] block=192.168.97.64/26 handle="k8s-pod-network.327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.663 [INFO][4819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.72/26] handle="k8s-pod-network.327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" host="ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3" Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 
00:16:38.663 [INFO][4819] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:38.702690 containerd[1459]: 2026-01-17 00:16:38.663 [INFO][4819] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.72/26] IPv6=[] ContainerID="327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" HandleID="k8s-pod-network.327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:38.706080 containerd[1459]: 2026-01-17 00:16:38.667 [INFO][4808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" Namespace="calico-system" Pod="goldmane-7c778bb748-zts27" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"ed571de0-820f-44a5-8d65-cc57b2d7af22", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"", Pod:"goldmane-7c778bb748-zts27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib74defe4d34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:38.706080 containerd[1459]: 2026-01-17 00:16:38.667 [INFO][4808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.72/32] ContainerID="327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" Namespace="calico-system" Pod="goldmane-7c778bb748-zts27" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:38.706080 containerd[1459]: 2026-01-17 00:16:38.667 [INFO][4808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib74defe4d34 ContainerID="327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" Namespace="calico-system" Pod="goldmane-7c778bb748-zts27" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:38.706080 containerd[1459]: 2026-01-17 00:16:38.672 [INFO][4808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" Namespace="calico-system" Pod="goldmane-7c778bb748-zts27" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:38.706080 containerd[1459]: 2026-01-17 00:16:38.673 [INFO][4808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" Namespace="calico-system" Pod="goldmane-7c778bb748-zts27" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"ed571de0-820f-44a5-8d65-cc57b2d7af22", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04", Pod:"goldmane-7c778bb748-zts27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib74defe4d34", MAC:"22:40:14:c6:6e:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:38.706080 containerd[1459]: 2026-01-17 00:16:38.689 [INFO][4808] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04" Namespace="calico-system" Pod="goldmane-7c778bb748-zts27" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:38.748425 containerd[1459]: time="2026-01-17T00:16:38.748175686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:38.748425 containerd[1459]: time="2026-01-17T00:16:38.748269963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:38.748425 containerd[1459]: time="2026-01-17T00:16:38.748304046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:38.749107 containerd[1459]: time="2026-01-17T00:16:38.749030642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:38.759122 kubelet[2578]: E0117 00:16:38.758954 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8" Jan 17 00:16:38.760880 kubelet[2578]: E0117 00:16:38.760475 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:16:38.829265 systemd[1]: Started cri-containerd-327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04.scope - libcontainer container 327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04. Jan 17 00:16:38.901148 containerd[1459]: time="2026-01-17T00:16:38.901026690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zts27,Uid:ed571de0-820f-44a5-8d65-cc57b2d7af22,Namespace:calico-system,Attempt:1,} returns sandbox id \"327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04\"" Jan 17 00:16:38.906032 containerd[1459]: time="2026-01-17T00:16:38.905734819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:16:39.067581 containerd[1459]: time="2026-01-17T00:16:39.067523744Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:39.069096 containerd[1459]: time="2026-01-17T00:16:39.069011664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:16:39.069304 containerd[1459]: time="2026-01-17T00:16:39.069059649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:39.069551 kubelet[2578]: E0117 00:16:39.069332 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:16:39.069659 kubelet[2578]: E0117 00:16:39.069572 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:16:39.070078 kubelet[2578]: E0117 00:16:39.070021 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zts27_calico-system(ed571de0-820f-44a5-8d65-cc57b2d7af22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:39.070224 kubelet[2578]: E0117 00:16:39.070114 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zts27" podUID="ed571de0-820f-44a5-8d65-cc57b2d7af22" Jan 17 00:16:39.266836 systemd-networkd[1371]: cali727f4791778: Gained IPv6LL Jan 17 00:16:39.767405 kubelet[2578]: E0117 00:16:39.765744 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8" Jan 17 00:16:39.767405 kubelet[2578]: E0117 00:16:39.767323 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zts27" podUID="ed571de0-820f-44a5-8d65-cc57b2d7af22" Jan 17 00:16:40.355280 systemd-networkd[1371]: calib74defe4d34: Gained IPv6LL Jan 17 00:16:40.773089 kubelet[2578]: E0117 00:16:40.772340 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zts27" podUID="ed571de0-820f-44a5-8d65-cc57b2d7af22" Jan 17 00:16:42.601579 ntpd[1428]: Listen normally on 8 vxlan.calico 192.168.97.64:123 Jan 17 00:16:42.602237 ntpd[1428]: 17 Jan 00:16:42 ntpd[1428]: Listen normally on 8 vxlan.calico 192.168.97.64:123 Jan 17 00:16:42.602237 ntpd[1428]: 17 Jan 00:16:42 ntpd[1428]: Listen normally on 9 cali9308a0c8e20 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 17 00:16:42.602237 ntpd[1428]: 17 Jan 00:16:42 ntpd[1428]: Listen normally on 10 vxlan.calico 
[fe80::6470:68ff:fe83:6a5a%5]:123 Jan 17 00:16:42.602237 ntpd[1428]: 17 Jan 00:16:42 ntpd[1428]: Listen normally on 11 calif201c6917a0 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 00:16:42.602237 ntpd[1428]: 17 Jan 00:16:42 ntpd[1428]: Listen normally on 12 calib9d1cde29b1 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 00:16:42.602237 ntpd[1428]: 17 Jan 00:16:42 ntpd[1428]: Listen normally on 13 cali2839b08f255 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 00:16:42.602237 ntpd[1428]: 17 Jan 00:16:42 ntpd[1428]: Listen normally on 14 cali3dce2411e25 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 00:16:42.602237 ntpd[1428]: 17 Jan 00:16:42 ntpd[1428]: Listen normally on 15 calic716dd89219 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 00:16:42.602237 ntpd[1428]: 17 Jan 00:16:42 ntpd[1428]: Listen normally on 16 cali727f4791778 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 17 00:16:42.601726 ntpd[1428]: Listen normally on 9 cali9308a0c8e20 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 17 00:16:42.602593 ntpd[1428]: 17 Jan 00:16:42 ntpd[1428]: Listen normally on 17 calib74defe4d34 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 17 00:16:42.601811 ntpd[1428]: Listen normally on 10 vxlan.calico [fe80::6470:68ff:fe83:6a5a%5]:123 Jan 17 00:16:42.601875 ntpd[1428]: Listen normally on 11 calif201c6917a0 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 00:16:42.601933 ntpd[1428]: Listen normally on 12 calib9d1cde29b1 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 00:16:42.601993 ntpd[1428]: Listen normally on 13 cali2839b08f255 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 00:16:42.602086 ntpd[1428]: Listen normally on 14 cali3dce2411e25 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 00:16:42.602150 ntpd[1428]: Listen normally on 15 calic716dd89219 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 00:16:42.602205 ntpd[1428]: Listen normally on 16 cali727f4791778 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 17 00:16:42.602275 ntpd[1428]: Listen normally on 17 calib74defe4d34 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 17 00:16:46.351371 containerd[1459]: time="2026-01-17T00:16:46.351312384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:16:46.519301 containerd[1459]: time="2026-01-17T00:16:46.519234145Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:46.521150 containerd[1459]: time="2026-01-17T00:16:46.521092543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:16:46.521391 containerd[1459]: time="2026-01-17T00:16:46.521113245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:16:46.521449 kubelet[2578]: E0117 00:16:46.521357 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:16:46.521449 kubelet[2578]: E0117 00:16:46.521411 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:16:46.522035 kubelet[2578]: E0117 00:16:46.521506 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-555ccdcf74-z7wj5_calico-system(864265cf-310b-4383-972d-cec82b8024d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:46.524441 containerd[1459]: time="2026-01-17T00:16:46.524402369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:16:46.684385 containerd[1459]: time="2026-01-17T00:16:46.684215858Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:46.685783 containerd[1459]: time="2026-01-17T00:16:46.685723297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:16:46.685983 containerd[1459]: time="2026-01-17T00:16:46.685746148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:16:46.686153 kubelet[2578]: E0117 00:16:46.686106 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:16:46.686332 kubelet[2578]: E0117 00:16:46.686170 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:16:46.686332 kubelet[2578]: E0117 00:16:46.686270 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-555ccdcf74-z7wj5_calico-system(864265cf-310b-4383-972d-cec82b8024d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:46.686515 kubelet[2578]: E0117 00:16:46.686338 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555ccdcf74-z7wj5" podUID="864265cf-310b-4383-972d-cec82b8024d4" Jan 17 00:16:48.340028 containerd[1459]: time="2026-01-17T00:16:48.339975949Z" level=info msg="StopPodSandbox for \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\"" Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.398 [WARNING][4900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"ed571de0-820f-44a5-8d65-cc57b2d7af22", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04", Pod:"goldmane-7c778bb748-zts27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib74defe4d34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.399 [INFO][4900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.399 [INFO][4900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" iface="eth0" netns="" Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.399 [INFO][4900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.399 [INFO][4900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.429 [INFO][4909] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" HandleID="k8s-pod-network.80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.429 [INFO][4909] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.430 [INFO][4909] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.441 [WARNING][4909] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" HandleID="k8s-pod-network.80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.441 [INFO][4909] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" HandleID="k8s-pod-network.80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.443 [INFO][4909] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:48.450918 containerd[1459]: 2026-01-17 00:16:48.447 [INFO][4900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:48.450918 containerd[1459]: time="2026-01-17T00:16:48.450085560Z" level=info msg="TearDown network for sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\" successfully" Jan 17 00:16:48.450918 containerd[1459]: time="2026-01-17T00:16:48.450119041Z" level=info msg="StopPodSandbox for \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\" returns successfully" Jan 17 00:16:48.450918 containerd[1459]: time="2026-01-17T00:16:48.450751412Z" level=info msg="RemovePodSandbox for \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\"" Jan 17 00:16:48.450918 containerd[1459]: time="2026-01-17T00:16:48.450793233Z" level=info msg="Forcibly stopping sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\"" Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.501 [WARNING][4923] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"ed571de0-820f-44a5-8d65-cc57b2d7af22", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"327a5b44922c0f389a03635d484a8406ec544015ef735ebfe2c0a1749f9efc04", Pod:"goldmane-7c778bb748-zts27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib74defe4d34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.501 [INFO][4923] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.502 [INFO][4923] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" iface="eth0" netns="" Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.502 [INFO][4923] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.502 [INFO][4923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.527 [INFO][4931] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" HandleID="k8s-pod-network.80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.528 [INFO][4931] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.528 [INFO][4931] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.536 [WARNING][4931] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" HandleID="k8s-pod-network.80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.536 [INFO][4931] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" HandleID="k8s-pod-network.80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-goldmane--7c778bb748--zts27-eth0" Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.537 [INFO][4931] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:48.540685 containerd[1459]: 2026-01-17 00:16:48.539 [INFO][4923] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6" Jan 17 00:16:48.541523 containerd[1459]: time="2026-01-17T00:16:48.540734085Z" level=info msg="TearDown network for sandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\" successfully" Jan 17 00:16:48.545411 containerd[1459]: time="2026-01-17T00:16:48.545352874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:48.545614 containerd[1459]: time="2026-01-17T00:16:48.545433485Z" level=info msg="RemovePodSandbox \"80959e3297378e84455e23c1db7ea7da54edc5ef2b2a07124dabf54ed098fff6\" returns successfully" Jan 17 00:16:48.546203 containerd[1459]: time="2026-01-17T00:16:48.546137492Z" level=info msg="StopPodSandbox for \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\"" Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.589 [WARNING][4945] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0", GenerateName:"calico-kube-controllers-5b8d4cfc64-", Namespace:"calico-system", SelfLink:"", UID:"4dd5910c-d46c-4829-af81-73c3a3c07bf1", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b8d4cfc64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d", Pod:"calico-kube-controllers-5b8d4cfc64-6pg6z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2839b08f255", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.590 [INFO][4945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.590 [INFO][4945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" iface="eth0" netns="" Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.590 [INFO][4945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.590 [INFO][4945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.616 [INFO][4952] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" HandleID="k8s-pod-network.d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.616 [INFO][4952] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.616 [INFO][4952] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.625 [WARNING][4952] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" HandleID="k8s-pod-network.d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.625 [INFO][4952] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" HandleID="k8s-pod-network.d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.627 [INFO][4952] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:48.631446 containerd[1459]: 2026-01-17 00:16:48.629 [INFO][4945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:48.631446 containerd[1459]: time="2026-01-17T00:16:48.631332219Z" level=info msg="TearDown network for sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\" successfully" Jan 17 00:16:48.631446 containerd[1459]: time="2026-01-17T00:16:48.631366751Z" level=info msg="StopPodSandbox for \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\" returns successfully" Jan 17 00:16:48.634424 containerd[1459]: time="2026-01-17T00:16:48.633555508Z" level=info msg="RemovePodSandbox for \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\"" Jan 17 00:16:48.634424 containerd[1459]: time="2026-01-17T00:16:48.633594014Z" level=info msg="Forcibly stopping sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\"" Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.694 [WARNING][4966] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0", GenerateName:"calico-kube-controllers-5b8d4cfc64-", Namespace:"calico-system", SelfLink:"", UID:"4dd5910c-d46c-4829-af81-73c3a3c07bf1", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b8d4cfc64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"ba8942e381d4a9e727a364cc3af049c0e79c3cbfb00e771d4fb6796b37fa383d", Pod:"calico-kube-controllers-5b8d4cfc64-6pg6z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2839b08f255", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.696 [INFO][4966] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.696 [INFO][4966] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" iface="eth0" netns="" Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.696 [INFO][4966] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.696 [INFO][4966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.721 [INFO][4974] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" HandleID="k8s-pod-network.d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.721 [INFO][4974] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.721 [INFO][4974] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.731 [WARNING][4974] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" HandleID="k8s-pod-network.d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.731 [INFO][4974] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" HandleID="k8s-pod-network.d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--kube--controllers--5b8d4cfc64--6pg6z-eth0" Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.733 [INFO][4974] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:48.737089 containerd[1459]: 2026-01-17 00:16:48.734 [INFO][4966] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba" Jan 17 00:16:48.737089 containerd[1459]: time="2026-01-17T00:16:48.736427593Z" level=info msg="TearDown network for sandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\" successfully" Jan 17 00:16:48.741958 containerd[1459]: time="2026-01-17T00:16:48.741904811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:48.742078 containerd[1459]: time="2026-01-17T00:16:48.742008907Z" level=info msg="RemovePodSandbox \"d9b8c0f40fc0b0b5f14a2f8910dda9604f458d8d55fb5e177240fb88ad50f9ba\" returns successfully" Jan 17 00:16:48.743063 containerd[1459]: time="2026-01-17T00:16:48.742633627Z" level=info msg="StopPodSandbox for \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\"" Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.786 [WARNING][4988] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0", GenerateName:"calico-apiserver-85bf985ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"53c5293b-6a33-4d3c-b982-707b2d5a0fd8", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bf985ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca", Pod:"calico-apiserver-85bf985ffc-kdf8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali727f4791778", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.786 [INFO][4988] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.786 [INFO][4988] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" iface="eth0" netns="" Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.786 [INFO][4988] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.786 [INFO][4988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.821 [INFO][4996] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" HandleID="k8s-pod-network.1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.821 [INFO][4996] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.821 [INFO][4996] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.834 [WARNING][4996] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" HandleID="k8s-pod-network.1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.834 [INFO][4996] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" HandleID="k8s-pod-network.1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.836 [INFO][4996] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:48.839269 containerd[1459]: 2026-01-17 00:16:48.837 [INFO][4988] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:48.839870 containerd[1459]: time="2026-01-17T00:16:48.839308587Z" level=info msg="TearDown network for sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\" successfully" Jan 17 00:16:48.839870 containerd[1459]: time="2026-01-17T00:16:48.839343792Z" level=info msg="StopPodSandbox for \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\" returns successfully" Jan 17 00:16:48.840565 containerd[1459]: time="2026-01-17T00:16:48.840521890Z" level=info msg="RemovePodSandbox for \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\"" Jan 17 00:16:48.840565 containerd[1459]: time="2026-01-17T00:16:48.840568762Z" level=info msg="Forcibly stopping sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\"" Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.886 [WARNING][5012] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0", GenerateName:"calico-apiserver-85bf985ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"53c5293b-6a33-4d3c-b982-707b2d5a0fd8", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bf985ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"6aa6e9c6fdb9968a079654370eec254712adbe2ff8cdf8c6655eb5d941b635ca", Pod:"calico-apiserver-85bf985ffc-kdf8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali727f4791778", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.886 [INFO][5012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.886 [INFO][5012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" iface="eth0" netns="" Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.886 [INFO][5012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.886 [INFO][5012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.920 [INFO][5019] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" HandleID="k8s-pod-network.1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.920 [INFO][5019] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.920 [INFO][5019] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.929 [WARNING][5019] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" HandleID="k8s-pod-network.1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.929 [INFO][5019] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" HandleID="k8s-pod-network.1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--kdf8q-eth0" Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.931 [INFO][5019] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:48.935915 containerd[1459]: 2026-01-17 00:16:48.932 [INFO][5012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306" Jan 17 00:16:48.935915 containerd[1459]: time="2026-01-17T00:16:48.934496334Z" level=info msg="TearDown network for sandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\" successfully" Jan 17 00:16:48.939968 containerd[1459]: time="2026-01-17T00:16:48.939924738Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:48.940176 containerd[1459]: time="2026-01-17T00:16:48.940131009Z" level=info msg="RemovePodSandbox \"1842c9d1d44d8314649a786cec1141e4ed342608a779df7e337d7e36e3e35306\" returns successfully" Jan 17 00:16:48.940766 containerd[1459]: time="2026-01-17T00:16:48.940730660Z" level=info msg="StopPodSandbox for \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\"" Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:48.986 [WARNING][5033] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b", Pod:"csi-node-driver-49lv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9d1cde29b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:48.986 [INFO][5033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:48.986 [INFO][5033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" iface="eth0" netns="" Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:48.987 [INFO][5033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:48.987 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:49.015 [INFO][5040] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" HandleID="k8s-pod-network.f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:49.015 [INFO][5040] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:49.015 [INFO][5040] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:49.024 [WARNING][5040] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" HandleID="k8s-pod-network.f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:49.024 [INFO][5040] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" HandleID="k8s-pod-network.f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:49.026 [INFO][5040] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:49.029968 containerd[1459]: 2026-01-17 00:16:49.028 [INFO][5033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:49.030759 containerd[1459]: time="2026-01-17T00:16:49.030023148Z" level=info msg="TearDown network for sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\" successfully" Jan 17 00:16:49.030759 containerd[1459]: time="2026-01-17T00:16:49.030080251Z" level=info msg="StopPodSandbox for \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\" returns successfully" Jan 17 00:16:49.030868 containerd[1459]: time="2026-01-17T00:16:49.030761497Z" level=info msg="RemovePodSandbox for \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\"" Jan 17 00:16:49.030868 containerd[1459]: time="2026-01-17T00:16:49.030803276Z" level=info msg="Forcibly stopping sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\"" Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.079 [WARNING][5054] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"4b1e7b055cb23acfa430fcfa6e98300fa9cea29e1ad9c2cce2453364108e054b", Pod:"csi-node-driver-49lv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9d1cde29b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.080 [INFO][5054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.080 [INFO][5054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" iface="eth0" netns="" Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.080 [INFO][5054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.080 [INFO][5054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.113 [INFO][5061] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" HandleID="k8s-pod-network.f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.113 [INFO][5061] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.113 [INFO][5061] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.123 [WARNING][5061] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" HandleID="k8s-pod-network.f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.123 [INFO][5061] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" HandleID="k8s-pod-network.f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-csi--node--driver--49lv6-eth0" Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.125 [INFO][5061] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:49.128821 containerd[1459]: 2026-01-17 00:16:49.127 [INFO][5054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe" Jan 17 00:16:49.129968 containerd[1459]: time="2026-01-17T00:16:49.128868578Z" level=info msg="TearDown network for sandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\" successfully" Jan 17 00:16:49.133876 containerd[1459]: time="2026-01-17T00:16:49.133815970Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:49.134056 containerd[1459]: time="2026-01-17T00:16:49.133896291Z" level=info msg="RemovePodSandbox \"f9a425c020edae54e7c57c750bd94ee1f45f627fa925aadad16ed5d4d0839cfe\" returns successfully" Jan 17 00:16:49.134589 containerd[1459]: time="2026-01-17T00:16:49.134498535Z" level=info msg="StopPodSandbox for \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\"" Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.178 [WARNING][5075] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2ecb4038-1a10-453e-a0f6-362231e5785b", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190", Pod:"coredns-66bc5c9577-nfdnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dce2411e25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.178 [INFO][5075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.179 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" iface="eth0" netns="" Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.179 [INFO][5075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.179 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.210 [INFO][5083] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" HandleID="k8s-pod-network.0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.211 [INFO][5083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.211 [INFO][5083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.219 [WARNING][5083] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" HandleID="k8s-pod-network.0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.220 [INFO][5083] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" HandleID="k8s-pod-network.0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.223 [INFO][5083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:49.226618 containerd[1459]: 2026-01-17 00:16:49.224 [INFO][5075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:49.226618 containerd[1459]: time="2026-01-17T00:16:49.226590953Z" level=info msg="TearDown network for sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\" successfully" Jan 17 00:16:49.229646 containerd[1459]: time="2026-01-17T00:16:49.226628065Z" level=info msg="StopPodSandbox for \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\" returns successfully" Jan 17 00:16:49.231033 containerd[1459]: time="2026-01-17T00:16:49.230992365Z" level=info msg="RemovePodSandbox for \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\"" Jan 17 00:16:49.231170 containerd[1459]: time="2026-01-17T00:16:49.231039303Z" level=info msg="Forcibly stopping sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\"" Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.274 [WARNING][5097] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2ecb4038-1a10-453e-a0f6-362231e5785b", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"7f834811de77d7ee1bc912993e33715e8eb73ed4fd005ea6cc672bf46f93a190", Pod:"coredns-66bc5c9577-nfdnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dce2411e25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.274 [INFO][5097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.274 [INFO][5097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" iface="eth0" netns="" Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.274 [INFO][5097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.274 [INFO][5097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.303 [INFO][5104] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" HandleID="k8s-pod-network.0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.303 [INFO][5104] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.303 [INFO][5104] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.311 [WARNING][5104] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" HandleID="k8s-pod-network.0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.311 [INFO][5104] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" HandleID="k8s-pod-network.0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--nfdnh-eth0" Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.313 [INFO][5104] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:49.316756 containerd[1459]: 2026-01-17 00:16:49.315 [INFO][5097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716" Jan 17 00:16:49.317591 containerd[1459]: time="2026-01-17T00:16:49.316793631Z" level=info msg="TearDown network for sandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\" successfully" Jan 17 00:16:49.321335 containerd[1459]: time="2026-01-17T00:16:49.321281059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:49.321460 containerd[1459]: time="2026-01-17T00:16:49.321357614Z" level=info msg="RemovePodSandbox \"0c369db9a04c54b7935fe4361c95fee44edc8b4b0afade7d76782973061b1716\" returns successfully" Jan 17 00:16:49.321997 containerd[1459]: time="2026-01-17T00:16:49.321960990Z" level=info msg="StopPodSandbox for \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\"" Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.366 [WARNING][5119] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0", GenerateName:"calico-apiserver-85bf985ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e768df9c-0c67-442b-b814-3828e727eb5c", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bf985ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7", Pod:"calico-apiserver-85bf985ffc-rd5bl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic716dd89219", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.366 [INFO][5119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.366 [INFO][5119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" iface="eth0" netns="" Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.366 [INFO][5119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.366 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.407 [INFO][5126] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" HandleID="k8s-pod-network.ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.407 [INFO][5126] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.407 [INFO][5126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.417 [WARNING][5126] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" HandleID="k8s-pod-network.ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.417 [INFO][5126] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" HandleID="k8s-pod-network.ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.419 [INFO][5126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:49.422315 containerd[1459]: 2026-01-17 00:16:49.420 [INFO][5119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:49.426032 containerd[1459]: time="2026-01-17T00:16:49.422350520Z" level=info msg="TearDown network for sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\" successfully" Jan 17 00:16:49.426032 containerd[1459]: time="2026-01-17T00:16:49.422384171Z" level=info msg="StopPodSandbox for \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\" returns successfully" Jan 17 00:16:49.426032 containerd[1459]: time="2026-01-17T00:16:49.423856510Z" level=info msg="RemovePodSandbox for \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\"" Jan 17 00:16:49.426032 containerd[1459]: time="2026-01-17T00:16:49.424098296Z" level=info msg="Forcibly stopping sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\"" Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.472 [WARNING][5141] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0", GenerateName:"calico-apiserver-85bf985ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e768df9c-0c67-442b-b814-3828e727eb5c", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bf985ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"e5173b02d2c095cd8731b7219150ce8634884ccec4a751a9530b9102969008f7", Pod:"calico-apiserver-85bf985ffc-rd5bl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic716dd89219", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.472 [INFO][5141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.472 [INFO][5141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" iface="eth0" netns="" Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.472 [INFO][5141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.472 [INFO][5141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.500 [INFO][5148] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" HandleID="k8s-pod-network.ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.501 [INFO][5148] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.501 [INFO][5148] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.509 [WARNING][5148] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" HandleID="k8s-pod-network.ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.509 [INFO][5148] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" HandleID="k8s-pod-network.ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-calico--apiserver--85bf985ffc--rd5bl-eth0" Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.511 [INFO][5148] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:49.514947 containerd[1459]: 2026-01-17 00:16:49.513 [INFO][5141] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9" Jan 17 00:16:49.514947 containerd[1459]: time="2026-01-17T00:16:49.514917151Z" level=info msg="TearDown network for sandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\" successfully" Jan 17 00:16:49.520103 containerd[1459]: time="2026-01-17T00:16:49.519976154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:49.520284 containerd[1459]: time="2026-01-17T00:16:49.520248419Z" level=info msg="RemovePodSandbox \"ad34f909882329e936a693c8ffd4e93e889f2b702eb4f41e93ccdafe4b977bd9\" returns successfully" Jan 17 00:16:49.520972 containerd[1459]: time="2026-01-17T00:16:49.520941040Z" level=info msg="StopPodSandbox for \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\"" Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.566 [WARNING][5162] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d801c454-90d8-47bb-9464-b452b91cd3db", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5", Pod:"coredns-66bc5c9577-qwwf7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif201c6917a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.567 [INFO][5162] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.567 [INFO][5162] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" iface="eth0" netns="" Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.567 [INFO][5162] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.567 [INFO][5162] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.593 [INFO][5169] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" HandleID="k8s-pod-network.a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.593 [INFO][5169] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.593 [INFO][5169] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.601 [WARNING][5169] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" HandleID="k8s-pod-network.a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.601 [INFO][5169] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" HandleID="k8s-pod-network.a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.603 [INFO][5169] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:49.607418 containerd[1459]: 2026-01-17 00:16:49.605 [INFO][5162] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:49.608413 containerd[1459]: time="2026-01-17T00:16:49.607487942Z" level=info msg="TearDown network for sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\" successfully" Jan 17 00:16:49.608413 containerd[1459]: time="2026-01-17T00:16:49.607545273Z" level=info msg="StopPodSandbox for \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\" returns successfully" Jan 17 00:16:49.608765 containerd[1459]: time="2026-01-17T00:16:49.608717502Z" level=info msg="RemovePodSandbox for \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\"" Jan 17 00:16:49.608765 containerd[1459]: time="2026-01-17T00:16:49.608759732Z" level=info msg="Forcibly stopping sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\"" Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.654 [WARNING][5184] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d801c454-90d8-47bb-9464-b452b91cd3db", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260116-2100-b68dcad0d0abe1b56ce3", ContainerID:"f6909e2332f71445470c212837857b63652f8cb2b5b516b22800826a261545a5", Pod:"coredns-66bc5c9577-qwwf7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif201c6917a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.654 [INFO][5184] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.654 [INFO][5184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" iface="eth0" netns="" Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.654 [INFO][5184] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.654 [INFO][5184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.689 [INFO][5191] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" HandleID="k8s-pod-network.a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.690 [INFO][5191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.690 [INFO][5191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.700 [WARNING][5191] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" HandleID="k8s-pod-network.a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.700 [INFO][5191] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" HandleID="k8s-pod-network.a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-coredns--66bc5c9577--qwwf7-eth0" Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.702 [INFO][5191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:49.705222 containerd[1459]: 2026-01-17 00:16:49.703 [INFO][5184] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549" Jan 17 00:16:49.706304 containerd[1459]: time="2026-01-17T00:16:49.706255970Z" level=info msg="TearDown network for sandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\" successfully" Jan 17 00:16:49.711303 containerd[1459]: time="2026-01-17T00:16:49.711253170Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:16:49.711424 containerd[1459]: time="2026-01-17T00:16:49.711328029Z" level=info msg="RemovePodSandbox \"a7fea9ef44de67fd09265f749e0f7c46b62c89ac6e61ae9457828fe509634549\" returns successfully" Jan 17 00:16:49.712090 containerd[1459]: time="2026-01-17T00:16:49.712018557Z" level=info msg="StopPodSandbox for \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\"" Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.759 [WARNING][5205] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--7f7864c4bd--h6vsc-eth0" Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.759 [INFO][5205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.759 [INFO][5205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" iface="eth0" netns="" Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.759 [INFO][5205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.759 [INFO][5205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.788 [INFO][5212] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" HandleID="k8s-pod-network.300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--7f7864c4bd--h6vsc-eth0" Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.788 [INFO][5212] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.789 [INFO][5212] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.800 [WARNING][5212] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" HandleID="k8s-pod-network.300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--7f7864c4bd--h6vsc-eth0" Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.800 [INFO][5212] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" HandleID="k8s-pod-network.300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--7f7864c4bd--h6vsc-eth0" Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.804 [INFO][5212] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:49.815308 containerd[1459]: 2026-01-17 00:16:49.811 [INFO][5205] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:49.815308 containerd[1459]: time="2026-01-17T00:16:49.815243081Z" level=info msg="TearDown network for sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\" successfully" Jan 17 00:16:49.815308 containerd[1459]: time="2026-01-17T00:16:49.815282598Z" level=info msg="StopPodSandbox for \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\" returns successfully" Jan 17 00:16:49.816755 containerd[1459]: time="2026-01-17T00:16:49.816110727Z" level=info msg="RemovePodSandbox for \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\"" Jan 17 00:16:49.816755 containerd[1459]: time="2026-01-17T00:16:49.816159266Z" level=info msg="Forcibly stopping sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\"" Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.866 [WARNING][5226] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" WorkloadEndpoint="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--7f7864c4bd--h6vsc-eth0" Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.866 [INFO][5226] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.866 [INFO][5226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" iface="eth0" netns="" Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.866 [INFO][5226] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.866 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.893 [INFO][5233] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" HandleID="k8s-pod-network.300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--7f7864c4bd--h6vsc-eth0" Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.893 [INFO][5233] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.894 [INFO][5233] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.904 [WARNING][5233] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" HandleID="k8s-pod-network.300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--7f7864c4bd--h6vsc-eth0" Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.904 [INFO][5233] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" HandleID="k8s-pod-network.300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Workload="ci--4081--3--6--nightly--20260116--2100--b68dcad0d0abe1b56ce3-k8s-whisker--7f7864c4bd--h6vsc-eth0" Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.906 [INFO][5233] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:49.909241 containerd[1459]: 2026-01-17 00:16:49.907 [INFO][5226] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a" Jan 17 00:16:49.909952 containerd[1459]: time="2026-01-17T00:16:49.909317207Z" level=info msg="TearDown network for sandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\" successfully" Jan 17 00:16:49.914133 containerd[1459]: time="2026-01-17T00:16:49.913638028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:49.914133 containerd[1459]: time="2026-01-17T00:16:49.913724716Z" level=info msg="RemovePodSandbox \"300a304f8f0080600367bd3edaa26f95dfd74e01a05be97283a0769da3643a3a\" returns successfully" Jan 17 00:16:50.349698 containerd[1459]: time="2026-01-17T00:16:50.349609875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:16:50.505649 containerd[1459]: time="2026-01-17T00:16:50.505591309Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:50.507140 containerd[1459]: time="2026-01-17T00:16:50.507077193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:16:50.507306 containerd[1459]: time="2026-01-17T00:16:50.507174979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:16:50.507412 kubelet[2578]: E0117 00:16:50.507363 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:16:50.507892 kubelet[2578]: E0117 00:16:50.507424 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:16:50.507892 kubelet[2578]: E0117 00:16:50.507536 2578 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-csi start failed in pod csi-node-driver-49lv6_calico-system(0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:50.510288 containerd[1459]: time="2026-01-17T00:16:50.509747203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:16:50.674679 containerd[1459]: time="2026-01-17T00:16:50.674503816Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:50.676116 containerd[1459]: time="2026-01-17T00:16:50.676034155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:16:50.676322 containerd[1459]: time="2026-01-17T00:16:50.676082502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:16:50.676398 kubelet[2578]: E0117 00:16:50.676322 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:16:50.676483 kubelet[2578]: E0117 00:16:50.676402 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:16:50.676542 kubelet[2578]: E0117 00:16:50.676503 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-49lv6_calico-system(0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:50.676663 kubelet[2578]: E0117 00:16:50.676568 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:16:51.348900 containerd[1459]: time="2026-01-17T00:16:51.348233357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:16:51.506065 containerd[1459]: time="2026-01-17T00:16:51.505979971Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:51.507435 containerd[1459]: time="2026-01-17T00:16:51.507367555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:16:51.507595 containerd[1459]: time="2026-01-17T00:16:51.507480010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:51.507943 kubelet[2578]: E0117 00:16:51.507854 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:51.507943 kubelet[2578]: E0117 00:16:51.507915 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:51.508501 kubelet[2578]: E0117 00:16:51.508217 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85bf985ffc-kdf8q_calico-apiserver(53c5293b-6a33-4d3c-b982-707b2d5a0fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:51.508501 kubelet[2578]: E0117 00:16:51.508270 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8" Jan 17 00:16:51.509728 containerd[1459]: time="2026-01-17T00:16:51.509404039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:16:51.665419 containerd[1459]: time="2026-01-17T00:16:51.665245784Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:51.666949 containerd[1459]: time="2026-01-17T00:16:51.666825964Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:16:51.666949 containerd[1459]: time="2026-01-17T00:16:51.666877980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:51.667193 kubelet[2578]: E0117 00:16:51.667113 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:51.667193 kubelet[2578]: E0117 00:16:51.667172 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:51.667376 kubelet[2578]: E0117 00:16:51.667278 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85bf985ffc-rd5bl_calico-apiserver(e768df9c-0c67-442b-b814-3828e727eb5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:51.667376 kubelet[2578]: E0117 00:16:51.667341 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:16:53.348431 containerd[1459]: time="2026-01-17T00:16:53.348168144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:16:53.513700 containerd[1459]: time="2026-01-17T00:16:53.513649087Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:53.515165 containerd[1459]: time="2026-01-17T00:16:53.515031522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:16:53.515165 containerd[1459]: time="2026-01-17T00:16:53.515097678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:16:53.515384 kubelet[2578]: E0117 00:16:53.515291 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:16:53.515384 kubelet[2578]: E0117 00:16:53.515341 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:16:53.515928 kubelet[2578]: E0117 00:16:53.515432 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5b8d4cfc64-6pg6z_calico-system(4dd5910c-d46c-4829-af81-73c3a3c07bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:53.515928 kubelet[2578]: E0117 00:16:53.515480 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" podUID="4dd5910c-d46c-4829-af81-73c3a3c07bf1" Jan 17 00:16:56.350653 containerd[1459]: time="2026-01-17T00:16:56.350581619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:16:56.594839 containerd[1459]: time="2026-01-17T00:16:56.594771461Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:56.596549 containerd[1459]: time="2026-01-17T00:16:56.596406133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:16:56.596549 containerd[1459]: time="2026-01-17T00:16:56.596463234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:56.596796 kubelet[2578]: E0117 00:16:56.596744 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:16:56.597395 kubelet[2578]: E0117 00:16:56.596806 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:16:56.597395 kubelet[2578]: E0117 00:16:56.596905 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-zts27_calico-system(ed571de0-820f-44a5-8d65-cc57b2d7af22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:56.597395 kubelet[2578]: E0117 00:16:56.596956 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zts27" podUID="ed571de0-820f-44a5-8d65-cc57b2d7af22" Jan 17 00:17:00.351235 kubelet[2578]: E0117 00:17:00.350520 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555ccdcf74-z7wj5" podUID="864265cf-310b-4383-972d-cec82b8024d4" Jan 17 00:17:02.350497 kubelet[2578]: E0117 00:17:02.349160 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8" Jan 17 00:17:02.353469 kubelet[2578]: E0117 00:17:02.352474 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:17:06.349521 kubelet[2578]: E0117 00:17:06.349454 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" podUID="4dd5910c-d46c-4829-af81-73c3a3c07bf1" Jan 17 00:17:06.352361 kubelet[2578]: E0117 00:17:06.350463 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:17:07.352092 kubelet[2578]: E0117 00:17:07.349684 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zts27" podUID="ed571de0-820f-44a5-8d65-cc57b2d7af22" Jan 17 00:17:13.414665 systemd[1]: Started sshd@9-10.128.0.91:22-4.153.228.146:56294.service - OpenSSH per-connection server daemon (4.153.228.146:56294). Jan 17 00:17:13.662595 sshd[5283]: Accepted publickey for core from 4.153.228.146 port 56294 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:13.666021 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:13.678097 systemd-logind[1449]: New session 10 of user core. Jan 17 00:17:13.683274 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:17:13.938979 sshd[5283]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:13.950316 systemd[1]: sshd@9-10.128.0.91:22-4.153.228.146:56294.service: Deactivated successfully. Jan 17 00:17:13.955145 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:17:13.956901 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:17:13.959359 systemd-logind[1449]: Removed session 10. 
Jan 17 00:17:15.349619 containerd[1459]: time="2026-01-17T00:17:15.349559875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:17:15.506727 containerd[1459]: time="2026-01-17T00:17:15.506431899Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:15.508960 containerd[1459]: time="2026-01-17T00:17:15.508022107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:17:15.508960 containerd[1459]: time="2026-01-17T00:17:15.508166440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:17:15.509205 kubelet[2578]: E0117 00:17:15.508364 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:17:15.509205 kubelet[2578]: E0117 00:17:15.508420 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:17:15.509205 kubelet[2578]: E0117 00:17:15.508517 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-555ccdcf74-z7wj5_calico-system(864265cf-310b-4383-972d-cec82b8024d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:15.512404 containerd[1459]: time="2026-01-17T00:17:15.510749079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:17:15.675227 containerd[1459]: time="2026-01-17T00:17:15.675083501Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:15.677070 containerd[1459]: time="2026-01-17T00:17:15.676589043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:17:15.677070 containerd[1459]: time="2026-01-17T00:17:15.676716010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:17:15.677273 kubelet[2578]: E0117 00:17:15.677083 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:17:15.677273 kubelet[2578]: E0117 00:17:15.677139 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:17:15.677273 kubelet[2578]: E0117 00:17:15.677232 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-555ccdcf74-z7wj5_calico-system(864265cf-310b-4383-972d-cec82b8024d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:15.677469 kubelet[2578]: E0117 00:17:15.677292 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555ccdcf74-z7wj5" podUID="864265cf-310b-4383-972d-cec82b8024d4" Jan 17 00:17:16.352266 containerd[1459]: time="2026-01-17T00:17:16.352189923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:17:16.548291 containerd[1459]: time="2026-01-17T00:17:16.548069825Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:16.550449 containerd[1459]: time="2026-01-17T00:17:16.550236397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:17:16.550449 containerd[1459]: time="2026-01-17T00:17:16.550296493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:17:16.552549 kubelet[2578]: E0117 00:17:16.550824 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:17:16.552549 kubelet[2578]: E0117 00:17:16.550878 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:17:16.552549 
kubelet[2578]: E0117 00:17:16.551139 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-49lv6_calico-system(0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:16.554305 containerd[1459]: time="2026-01-17T00:17:16.554096323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:17:16.725115 containerd[1459]: time="2026-01-17T00:17:16.724143670Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:16.728293 containerd[1459]: time="2026-01-17T00:17:16.727939057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:17:16.728293 containerd[1459]: time="2026-01-17T00:17:16.728083308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:16.728492 kubelet[2578]: E0117 00:17:16.728318 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:16.728492 kubelet[2578]: E0117 00:17:16.728375 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:16.728726 kubelet[2578]: E0117 00:17:16.728674 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85bf985ffc-kdf8q_calico-apiserver(53c5293b-6a33-4d3c-b982-707b2d5a0fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:16.728823 kubelet[2578]: E0117 00:17:16.728734 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8" Jan 17 00:17:16.731344 containerd[1459]: time="2026-01-17T00:17:16.731305890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:17:16.899720 containerd[1459]: time="2026-01-17T00:17:16.899660545Z" level=info msg="trying next host - response 
was http.StatusNotFound" host=ghcr.io Jan 17 00:17:16.901293 containerd[1459]: time="2026-01-17T00:17:16.901226453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:17:16.901447 containerd[1459]: time="2026-01-17T00:17:16.901360285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:17:16.901723 kubelet[2578]: E0117 00:17:16.901649 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:17:16.901931 kubelet[2578]: E0117 00:17:16.901732 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:17:16.902367 kubelet[2578]: E0117 00:17:16.901892 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-49lv6_calico-system(0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:16.902634 kubelet[2578]: E0117 00:17:16.902571 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:17:18.993174 systemd[1]: Started sshd@10-10.128.0.91:22-4.153.228.146:34930.service - OpenSSH per-connection server daemon (4.153.228.146:34930). Jan 17 00:17:19.229166 sshd[5302]: Accepted publickey for core from 4.153.228.146 port 34930 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:19.231519 sshd[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:19.242953 systemd-logind[1449]: New session 11 of user core. 
Jan 17 00:17:19.249431 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:17:19.349343 containerd[1459]: time="2026-01-17T00:17:19.349292667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:17:19.524313 containerd[1459]: time="2026-01-17T00:17:19.523744479Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:19.528072 containerd[1459]: time="2026-01-17T00:17:19.525680763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:17:19.528072 containerd[1459]: time="2026-01-17T00:17:19.525537036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:17:19.528284 kubelet[2578]: E0117 00:17:19.526193 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:17:19.528284 kubelet[2578]: E0117 00:17:19.526249 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:17:19.528284 kubelet[2578]: E0117 00:17:19.526342 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5b8d4cfc64-6pg6z_calico-system(4dd5910c-d46c-4829-af81-73c3a3c07bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:19.528284 kubelet[2578]: E0117 00:17:19.527427 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" podUID="4dd5910c-d46c-4829-af81-73c3a3c07bf1" Jan 17 00:17:19.564404 sshd[5302]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:19.573232 systemd[1]: sshd@10-10.128.0.91:22-4.153.228.146:34930.service: Deactivated successfully. Jan 17 00:17:19.578276 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:17:19.585115 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:17:19.586694 systemd-logind[1449]: Removed session 11. 
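
The failures above all follow the same chain: containerd asks ghcr.io for the requested tag, the registry answers HTTP 404 ("trying next host - response was http.StatusNotFound"), the resolver gives up with "failed to resolve reference ... not found", and kubelet surfaces that as ErrImagePull for the pod. The missing tag can be checked directly against the registry. The sketch below is not part of the log; it assumes the repositories are public on ghcr.io, that GHCR's anonymous token endpoint and the standard OCI registry v2 manifest endpoint apply, and that the third-party "requests" library is available.

    import requests  # third-party: pip install requests

    IMAGE = "flatcar/calico/csi"   # repository name taken from the log above
    TAG = "v3.30.4"                # tag containerd could not resolve

    # 1. Fetch an anonymous bearer token with pull scope (assumed GHCR token endpoint).
    token = requests.get(
        "https://ghcr.io/token",
        params={"scope": f"repository:{IMAGE}:pull"},
        timeout=10,
    ).json()["token"]

    # 2. Ask for the tag's manifest; a 404 here matches the
    #    "failed to resolve reference ... not found" errors above.
    resp = requests.head(
        f"https://ghcr.io/v2/{IMAGE}/manifests/{TAG}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        timeout=10,
    )
    print(IMAGE, TAG, "->", resp.status_code)  # 404 would mean the tag is not published

A 200 status for the same request would instead point at a node-local problem (auth, mirror, or containerd configuration) rather than a missing tag.
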
Jan 17 00:17:20.348843 containerd[1459]: time="2026-01-17T00:17:20.348742083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:17:20.525003 containerd[1459]: time="2026-01-17T00:17:20.524765498Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:20.526502 containerd[1459]: time="2026-01-17T00:17:20.526325400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:17:20.526502 containerd[1459]: time="2026-01-17T00:17:20.526438663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:20.528091 kubelet[2578]: E0117 00:17:20.526853 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:20.528091 kubelet[2578]: E0117 00:17:20.526907 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:20.528091 kubelet[2578]: E0117 00:17:20.527000 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85bf985ffc-rd5bl_calico-apiserver(e768df9c-0c67-442b-b814-3828e727eb5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:20.528091 kubelet[2578]: E0117 00:17:20.527064 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:17:21.349854 containerd[1459]: time="2026-01-17T00:17:21.349733629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:17:21.509115 containerd[1459]: time="2026-01-17T00:17:21.508845041Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:21.510487 containerd[1459]: time="2026-01-17T00:17:21.510300958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:17:21.510487 containerd[1459]: 
time="2026-01-17T00:17:21.510421028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:21.510675 kubelet[2578]: E0117 00:17:21.510622 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:17:21.510813 kubelet[2578]: E0117 00:17:21.510682 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:17:21.510876 kubelet[2578]: E0117 00:17:21.510855 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zts27_calico-system(ed571de0-820f-44a5-8d65-cc57b2d7af22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:21.510935 kubelet[2578]: E0117 00:17:21.510903 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zts27" podUID="ed571de0-820f-44a5-8d65-cc57b2d7af22" Jan 17 00:17:24.617237 systemd[1]: Started sshd@11-10.128.0.91:22-4.153.228.146:40150.service - OpenSSH per-connection server daemon (4.153.228.146:40150). Jan 17 00:17:24.849084 sshd[5316]: Accepted publickey for core from 4.153.228.146 port 40150 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:24.851216 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:24.858678 systemd-logind[1449]: New session 12 of user core. Jan 17 00:17:24.866569 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:17:25.136372 sshd[5316]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:25.146558 systemd[1]: sshd@11-10.128.0.91:22-4.153.228.146:40150.service: Deactivated successfully. Jan 17 00:17:25.152324 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:17:25.154887 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:17:25.156689 systemd-logind[1449]: Removed session 12. Jan 17 00:17:25.186398 systemd[1]: Started sshd@12-10.128.0.91:22-4.153.228.146:40152.service - OpenSSH per-connection server daemon (4.153.228.146:40152). Jan 17 00:17:25.423003 sshd[5330]: Accepted publickey for core from 4.153.228.146 port 40152 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:25.426126 sshd[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:25.433245 systemd-logind[1449]: New session 13 of user core. 
Jan 17 00:17:25.442915 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:17:25.760547 sshd[5330]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:25.770610 systemd[1]: sshd@12-10.128.0.91:22-4.153.228.146:40152.service: Deactivated successfully. Jan 17 00:17:25.776588 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:17:25.779236 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:17:25.783486 systemd-logind[1449]: Removed session 13. Jan 17 00:17:25.807468 systemd[1]: Started sshd@13-10.128.0.91:22-4.153.228.146:40156.service - OpenSSH per-connection server daemon (4.153.228.146:40156). Jan 17 00:17:26.056151 sshd[5341]: Accepted publickey for core from 4.153.228.146 port 40156 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:26.059951 sshd[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:26.071486 systemd-logind[1449]: New session 14 of user core. Jan 17 00:17:26.077280 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:17:26.353377 sshd[5341]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:26.364132 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:17:26.364506 systemd[1]: sshd@13-10.128.0.91:22-4.153.228.146:40156.service: Deactivated successfully. Jan 17 00:17:26.370425 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:17:26.378777 systemd-logind[1449]: Removed session 14. Jan 17 00:17:28.355594 kubelet[2578]: E0117 00:17:28.355303 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:17:30.352594 kubelet[2578]: E0117 00:17:30.352418 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555ccdcf74-z7wj5" podUID="864265cf-310b-4383-972d-cec82b8024d4" Jan 17 00:17:31.349390 kubelet[2578]: E0117 00:17:31.349010 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:17:31.349390 kubelet[2578]: E0117 00:17:31.349088 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8" Jan 17 00:17:31.404177 systemd[1]: Started sshd@14-10.128.0.91:22-4.153.228.146:40170.service - OpenSSH per-connection server daemon (4.153.228.146:40170). Jan 17 00:17:31.648095 sshd[5356]: Accepted publickey for core from 4.153.228.146 port 40170 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:31.648290 sshd[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:31.662024 systemd-logind[1449]: New session 15 of user core. Jan 17 00:17:31.669246 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:17:31.761956 systemd[1]: run-containerd-runc-k8s.io-bc3e1e4c7fe9c6abb5e7673bf3c3c2902d2b7092bb5b689d1c2a37a582d8d0ca-runc.Uylk4f.mount: Deactivated successfully. Jan 17 00:17:31.980656 sshd[5356]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:31.988024 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:17:31.989658 systemd[1]: sshd@14-10.128.0.91:22-4.153.228.146:40170.service: Deactivated successfully. Jan 17 00:17:31.994516 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:17:31.997911 systemd-logind[1449]: Removed session 15. 
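
Note how the kubelet entries shift from ErrImagePull (raised immediately after a failed pull) to ImagePullBackOff in the lines above: once a pull fails, the kubelet does not retry at once but waits an increasing interval, which is why the "Back-off pulling image" messages repeat between fresh PullImage attempts (00:17:16, then 00:17:57 and later in this log). The schedule below is only an illustration; the initial delay and cap are assumed defaults (commonly about 10 s doubling up to about 5 min), not values read from this node's kubelet configuration.

    # Illustrative exponential backoff schedule (assumed parameters, see note above).
    def backoff_schedule(initial=10.0, cap=300.0, attempts=8):
        delay, total = initial, 0.0
        for attempt in range(1, attempts + 1):
            total += delay
            yield attempt, delay, total
            delay = min(delay * 2, cap)  # double the wait, never past the cap

    for attempt, delay, total in backoff_schedule():
        print(f"pull attempt {attempt}: wait {delay:5.0f}s (t+{total:5.0f}s)")
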
Jan 17 00:17:34.351834 kubelet[2578]: E0117 00:17:34.351416 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" podUID="4dd5910c-d46c-4829-af81-73c3a3c07bf1" Jan 17 00:17:36.353897 kubelet[2578]: E0117 00:17:36.353833 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zts27" podUID="ed571de0-820f-44a5-8d65-cc57b2d7af22" Jan 17 00:17:37.030129 systemd[1]: Started sshd@15-10.128.0.91:22-4.153.228.146:53444.service - OpenSSH per-connection server daemon (4.153.228.146:53444). Jan 17 00:17:37.266445 sshd[5399]: Accepted publickey for core from 4.153.228.146 port 53444 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:37.268606 sshd[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:37.277096 systemd-logind[1449]: New session 16 of user core. Jan 17 00:17:37.284291 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:17:37.557400 sshd[5399]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:37.564956 systemd[1]: sshd@15-10.128.0.91:22-4.153.228.146:53444.service: Deactivated successfully. Jan 17 00:17:37.571003 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:17:37.576164 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:17:37.578510 systemd-logind[1449]: Removed session 16. Jan 17 00:17:39.350394 kubelet[2578]: E0117 00:17:39.350322 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:17:42.607406 systemd[1]: Started sshd@16-10.128.0.91:22-4.153.228.146:53446.service - OpenSSH per-connection server daemon (4.153.228.146:53446). 
Jan 17 00:17:42.869007 sshd[5411]: Accepted publickey for core from 4.153.228.146 port 53446 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:42.871006 sshd[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:42.881898 systemd-logind[1449]: New session 17 of user core. Jan 17 00:17:42.891841 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:17:43.210827 sshd[5411]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:43.220432 systemd[1]: sshd@16-10.128.0.91:22-4.153.228.146:53446.service: Deactivated successfully. Jan 17 00:17:43.225660 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:17:43.231022 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:17:43.233759 systemd-logind[1449]: Removed session 17. Jan 17 00:17:44.351746 kubelet[2578]: E0117 00:17:44.351671 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8" Jan 17 00:17:44.355148 kubelet[2578]: E0117 00:17:44.354404 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555ccdcf74-z7wj5" podUID="864265cf-310b-4383-972d-cec82b8024d4" Jan 17 00:17:45.348757 kubelet[2578]: E0117 00:17:45.348649 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:17:48.266398 systemd[1]: Started sshd@17-10.128.0.91:22-4.153.228.146:42860.service - OpenSSH per-connection server daemon (4.153.228.146:42860). 
Jan 17 00:17:48.501574 sshd[5426]: Accepted publickey for core from 4.153.228.146 port 42860 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:48.502990 sshd[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:48.512535 systemd-logind[1449]: New session 18 of user core. Jan 17 00:17:48.519261 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:17:48.783914 sshd[5426]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:48.794873 systemd[1]: sshd@17-10.128.0.91:22-4.153.228.146:42860.service: Deactivated successfully. Jan 17 00:17:48.799737 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:17:48.801194 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:17:48.802966 systemd-logind[1449]: Removed session 18. Jan 17 00:17:48.825122 systemd[1]: Started sshd@18-10.128.0.91:22-4.153.228.146:42874.service - OpenSSH per-connection server daemon (4.153.228.146:42874). Jan 17 00:17:49.070139 sshd[5441]: Accepted publickey for core from 4.153.228.146 port 42874 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:49.071632 sshd[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:49.082338 systemd-logind[1449]: New session 19 of user core. Jan 17 00:17:49.087708 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:17:49.349882 kubelet[2578]: E0117 00:17:49.349733 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" podUID="4dd5910c-d46c-4829-af81-73c3a3c07bf1" Jan 17 00:17:49.502340 sshd[5441]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:49.507819 systemd[1]: sshd@18-10.128.0.91:22-4.153.228.146:42874.service: Deactivated successfully. Jan 17 00:17:49.514196 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:17:49.519280 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:17:49.522027 systemd-logind[1449]: Removed session 19. Jan 17 00:17:49.552438 systemd[1]: Started sshd@19-10.128.0.91:22-4.153.228.146:42888.service - OpenSSH per-connection server daemon (4.153.228.146:42888). Jan 17 00:17:49.786569 sshd[5452]: Accepted publickey for core from 4.153.228.146 port 42888 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:49.788950 sshd[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:49.797791 systemd-logind[1449]: New session 20 of user core. Jan 17 00:17:49.803261 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 17 00:17:50.353650 kubelet[2578]: E0117 00:17:50.350727 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zts27" podUID="ed571de0-820f-44a5-8d65-cc57b2d7af22" Jan 17 00:17:50.948188 sshd[5452]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:50.956028 systemd[1]: sshd@19-10.128.0.91:22-4.153.228.146:42888.service: Deactivated successfully. Jan 17 00:17:50.960057 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:17:50.965398 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:17:50.967719 systemd-logind[1449]: Removed session 20. Jan 17 00:17:50.993391 systemd[1]: Started sshd@20-10.128.0.91:22-4.153.228.146:42898.service - OpenSSH per-connection server daemon (4.153.228.146:42898). Jan 17 00:17:51.233741 sshd[5469]: Accepted publickey for core from 4.153.228.146 port 42898 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:51.234593 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:51.245297 systemd-logind[1449]: New session 21 of user core. Jan 17 00:17:51.251237 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:17:51.778358 sshd[5469]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:51.784734 systemd[1]: sshd@20-10.128.0.91:22-4.153.228.146:42898.service: Deactivated successfully. Jan 17 00:17:51.784973 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:17:51.792206 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:17:51.796999 systemd-logind[1449]: Removed session 21. Jan 17 00:17:51.826177 systemd[1]: Started sshd@21-10.128.0.91:22-4.153.228.146:42902.service - OpenSSH per-connection server daemon (4.153.228.146:42902). Jan 17 00:17:52.061374 sshd[5481]: Accepted publickey for core from 4.153.228.146 port 42902 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:52.064422 sshd[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:52.076998 systemd-logind[1449]: New session 22 of user core. Jan 17 00:17:52.082517 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:17:52.341900 sshd[5481]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:52.350702 systemd[1]: sshd@21-10.128.0.91:22-4.153.228.146:42902.service: Deactivated successfully. Jan 17 00:17:52.350929 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:17:52.357528 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 17 00:17:52.360729 kubelet[2578]: E0117 00:17:52.360648 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:17:52.363216 systemd-logind[1449]: Removed session 22. Jan 17 00:17:57.351077 containerd[1459]: time="2026-01-17T00:17:57.349753740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:17:57.388726 systemd[1]: Started sshd@22-10.128.0.91:22-4.153.228.146:48942.service - OpenSSH per-connection server daemon (4.153.228.146:48942). Jan 17 00:17:57.526888 containerd[1459]: time="2026-01-17T00:17:57.524644158Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:57.526888 containerd[1459]: time="2026-01-17T00:17:57.526357615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:17:57.526888 containerd[1459]: time="2026-01-17T00:17:57.526548433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:17:57.527330 kubelet[2578]: E0117 00:17:57.527083 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:17:57.527330 kubelet[2578]: E0117 00:17:57.527173 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:17:57.527940 kubelet[2578]: E0117 00:17:57.527449 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-555ccdcf74-z7wj5_calico-system(864265cf-310b-4383-972d-cec82b8024d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:57.529850 containerd[1459]: 
time="2026-01-17T00:17:57.529811089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:17:57.625452 sshd[5504]: Accepted publickey for core from 4.153.228.146 port 48942 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:17:57.627827 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:57.640301 systemd-logind[1449]: New session 23 of user core. Jan 17 00:17:57.646262 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:17:57.693470 containerd[1459]: time="2026-01-17T00:17:57.693209915Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:57.695009 containerd[1459]: time="2026-01-17T00:17:57.694759500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:17:57.695009 containerd[1459]: time="2026-01-17T00:17:57.694891468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:57.695308 kubelet[2578]: E0117 00:17:57.695255 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:57.695416 kubelet[2578]: E0117 00:17:57.695321 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:57.696038 kubelet[2578]: E0117 00:17:57.695610 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85bf985ffc-kdf8q_calico-apiserver(53c5293b-6a33-4d3c-b982-707b2d5a0fd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:57.696038 kubelet[2578]: E0117 00:17:57.695683 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8" Jan 17 00:17:57.696424 containerd[1459]: time="2026-01-17T00:17:57.696371185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:17:57.869243 containerd[1459]: time="2026-01-17T00:17:57.869161356Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:57.873342 containerd[1459]: 
time="2026-01-17T00:17:57.873216868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:17:57.873342 containerd[1459]: time="2026-01-17T00:17:57.873306200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:17:57.873579 kubelet[2578]: E0117 00:17:57.873505 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:17:57.873579 kubelet[2578]: E0117 00:17:57.873565 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:17:57.873709 kubelet[2578]: E0117 00:17:57.873664 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-555ccdcf74-z7wj5_calico-system(864265cf-310b-4383-972d-cec82b8024d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:57.875352 kubelet[2578]: E0117 00:17:57.875264 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555ccdcf74-z7wj5" podUID="864265cf-310b-4383-972d-cec82b8024d4" Jan 17 00:17:57.953788 sshd[5504]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:57.963875 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:17:57.965600 systemd[1]: sshd@22-10.128.0.91:22-4.153.228.146:48942.service: Deactivated successfully. Jan 17 00:17:57.971911 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:17:57.978441 systemd-logind[1449]: Removed session 23. 
Jan 17 00:17:59.350057 kubelet[2578]: E0117 00:17:59.349976 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:18:00.354772 containerd[1459]: time="2026-01-17T00:18:00.354695162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:18:00.546437 containerd[1459]: time="2026-01-17T00:18:00.546175855Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:00.548021 containerd[1459]: time="2026-01-17T00:18:00.547758889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:18:00.548021 containerd[1459]: time="2026-01-17T00:18:00.547826820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:18:00.549190 kubelet[2578]: E0117 00:18:00.548538 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:18:00.549190 kubelet[2578]: E0117 00:18:00.548602 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:18:00.549190 kubelet[2578]: E0117 00:18:00.548704 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5b8d4cfc64-6pg6z_calico-system(4dd5910c-d46c-4829-af81-73c3a3c07bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:00.549190 kubelet[2578]: E0117 00:18:00.548754 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b8d4cfc64-6pg6z" 
podUID="4dd5910c-d46c-4829-af81-73c3a3c07bf1" Jan 17 00:18:03.001534 systemd[1]: Started sshd@23-10.128.0.91:22-4.153.228.146:48958.service - OpenSSH per-connection server daemon (4.153.228.146:48958). Jan 17 00:18:03.233379 sshd[5540]: Accepted publickey for core from 4.153.228.146 port 48958 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:18:03.235032 sshd[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:18:03.243101 systemd-logind[1449]: New session 24 of user core. Jan 17 00:18:03.251260 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:18:03.575383 sshd[5540]: pam_unix(sshd:session): session closed for user core Jan 17 00:18:03.584376 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:18:03.585255 systemd[1]: sshd@23-10.128.0.91:22-4.153.228.146:48958.service: Deactivated successfully. Jan 17 00:18:03.589996 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:18:03.595894 systemd-logind[1449]: Removed session 24. Jan 17 00:18:04.350433 containerd[1459]: time="2026-01-17T00:18:04.350383671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:18:04.521302 containerd[1459]: time="2026-01-17T00:18:04.521234742Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:04.522936 containerd[1459]: time="2026-01-17T00:18:04.522847244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:18:04.523160 containerd[1459]: time="2026-01-17T00:18:04.522908169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:18:04.523339 kubelet[2578]: E0117 00:18:04.523235 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:18:04.523339 kubelet[2578]: E0117 00:18:04.523296 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:18:04.525256 kubelet[2578]: E0117 00:18:04.523399 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zts27_calico-system(ed571de0-820f-44a5-8d65-cc57b2d7af22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:04.525256 kubelet[2578]: E0117 00:18:04.523447 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zts27" podUID="ed571de0-820f-44a5-8d65-cc57b2d7af22" Jan 17 00:18:05.350064 containerd[1459]: time="2026-01-17T00:18:05.349655196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:18:05.510064 containerd[1459]: time="2026-01-17T00:18:05.509760029Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:05.513078 containerd[1459]: time="2026-01-17T00:18:05.511311439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:18:05.513078 containerd[1459]: time="2026-01-17T00:18:05.511420566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:18:05.513293 kubelet[2578]: E0117 00:18:05.511668 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:18:05.513293 kubelet[2578]: E0117 00:18:05.511723 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:18:05.513293 kubelet[2578]: E0117 00:18:05.511813 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-49lv6_calico-system(0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:05.515694 containerd[1459]: time="2026-01-17T00:18:05.515661832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:18:05.684375 containerd[1459]: time="2026-01-17T00:18:05.683996090Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:05.685931 containerd[1459]: time="2026-01-17T00:18:05.685737812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:18:05.685931 containerd[1459]: time="2026-01-17T00:18:05.685864372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:18:05.686952 kubelet[2578]: E0117 00:18:05.686347 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:18:05.686952 kubelet[2578]: E0117 00:18:05.686410 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:18:05.686952 kubelet[2578]: E0117 00:18:05.686506 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-49lv6_calico-system(0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:05.687665 kubelet[2578]: E0117 00:18:05.686568 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49lv6" podUID="0c0f8dfe-d5e5-4727-b5d0-0b1225ee5624" Jan 17 00:18:08.623915 systemd[1]: Started sshd@24-10.128.0.91:22-4.153.228.146:49928.service - OpenSSH per-connection server daemon (4.153.228.146:49928). Jan 17 00:18:08.874556 sshd[5573]: Accepted publickey for core from 4.153.228.146 port 49928 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:18:08.875531 sshd[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:18:08.883650 systemd-logind[1449]: New session 25 of user core. Jan 17 00:18:08.893229 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:18:09.166377 sshd[5573]: pam_unix(sshd:session): session closed for user core Jan 17 00:18:09.173413 systemd[1]: sshd@24-10.128.0.91:22-4.153.228.146:49928.service: Deactivated successfully. Jan 17 00:18:09.177939 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:18:09.179456 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:18:09.182739 systemd-logind[1449]: Removed session 25. 
Jan 17 00:18:09.349812 kubelet[2578]: E0117 00:18:09.349750 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555ccdcf74-z7wj5" podUID="864265cf-310b-4383-972d-cec82b8024d4" Jan 17 00:18:10.350375 containerd[1459]: time="2026-01-17T00:18:10.349602081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:18:10.517067 containerd[1459]: time="2026-01-17T00:18:10.516710863Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:18:10.519887 containerd[1459]: time="2026-01-17T00:18:10.519693682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:18:10.519887 containerd[1459]: time="2026-01-17T00:18:10.519840461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:18:10.520913 kubelet[2578]: E0117 00:18:10.520261 2578 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:18:10.520913 kubelet[2578]: E0117 00:18:10.520317 2578 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:18:10.520913 kubelet[2578]: E0117 00:18:10.520412 2578 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85bf985ffc-rd5bl_calico-apiserver(e768df9c-0c67-442b-b814-3828e727eb5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:18:10.520913 kubelet[2578]: E0117 00:18:10.520460 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-rd5bl" podUID="e768df9c-0c67-442b-b814-3828e727eb5c" Jan 17 00:18:11.350622 kubelet[2578]: E0117 00:18:11.350555 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85bf985ffc-kdf8q" podUID="53c5293b-6a33-4d3c-b982-707b2d5a0fd8"