Jan 24 00:44:14.088965 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:44:14.089011 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:44:14.089030 kernel: BIOS-provided physical RAM map:
Jan 24 00:44:14.089044 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 24 00:44:14.089058 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 24 00:44:14.089072 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 24 00:44:14.089090 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 24 00:44:14.089108 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 24 00:44:14.089122 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 24 00:44:14.089137 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 24 00:44:14.089152 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 24 00:44:14.089167 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 24 00:44:14.089182 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 24 00:44:14.089197 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 24 00:44:14.089220 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 24 00:44:14.089236 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 24 00:44:14.089252 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 24 00:44:14.089268 kernel: NX (Execute Disable) protection: active
Jan 24 00:44:14.089283 kernel: APIC: Static calls initialized
Jan 24 00:44:14.089299 kernel: efi: EFI v2.7 by EDK II
Jan 24 00:44:14.089339 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018
Jan 24 00:44:14.089354 kernel: SMBIOS 2.4 present.
Jan 24 00:44:14.089371 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Jan 24 00:44:14.089387 kernel: Hypervisor detected: KVM
Jan 24 00:44:14.089408 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:44:14.089425 kernel: kvm-clock: using sched offset of 13338932652 cycles
Jan 24 00:44:14.089442 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:44:14.089459 kernel: tsc: Detected 2299.998 MHz processor
Jan 24 00:44:14.089475 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:44:14.089493 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:44:14.089510 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 24 00:44:14.089528 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 24 00:44:14.089545 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:44:14.089566 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 24 00:44:14.089582 kernel: Using GB pages for direct mapping
Jan 24 00:44:14.089606 kernel: Secure boot disabled
Jan 24 00:44:14.089623 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:44:14.089639 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 24 00:44:14.089657 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 24 00:44:14.089675 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 24 00:44:14.089700 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 24 00:44:14.089722 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 24 00:44:14.089740 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Jan 24 00:44:14.089759 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 24 00:44:14.089778 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 24 00:44:14.089796 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 24 00:44:14.089814 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 24 00:44:14.089837 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 24 00:44:14.089855 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 24 00:44:14.089874 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 24 00:44:14.089892 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 24 00:44:14.089911 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 24 00:44:14.089929 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 24 00:44:14.089947 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 24 00:44:14.089965 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 24 00:44:14.089983 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 24 00:44:14.090005 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 24 00:44:14.090023 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 24 00:44:14.090042 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 24 00:44:14.090060 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 24 00:44:14.090078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 24 00:44:14.090096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 24 00:44:14.090115 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 24 00:44:14.090133 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 24 00:44:14.090151 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Jan 24 00:44:14.090174 kernel: Zone ranges:
Jan 24 00:44:14.090193 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:44:14.090211 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 24 00:44:14.090230 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 24 00:44:14.090248 kernel: Movable zone start for each node
Jan 24 00:44:14.090266 kernel: Early memory node ranges
Jan 24 00:44:14.090285 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 24 00:44:14.090303 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 24 00:44:14.090346 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 24 00:44:14.090369 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 24 00:44:14.090387 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 24 00:44:14.090405 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 24 00:44:14.090424 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:44:14.090442 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 24 00:44:14.090460 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 24 00:44:14.090479 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 24 00:44:14.090496 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 24 00:44:14.090515 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 24 00:44:14.090533 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:44:14.090556 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:44:14.090575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:44:14.090593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:44:14.090617 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:44:14.090636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:44:14.090654 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:44:14.090673 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 24 00:44:14.090691 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 24 00:44:14.090713 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:44:14.090732 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:44:14.090751 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 24 00:44:14.090769 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 24 00:44:14.090788 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 24 00:44:14.090806 kernel: pcpu-alloc: [0] 0 1
Jan 24 00:44:14.090824 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:44:14.090842 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:44:14.090862 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:44:14.090886 kernel: random: crng init done
Jan 24 00:44:14.090903 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 24 00:44:14.090922 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:44:14.090941 kernel: Fallback order for Node 0: 0
Jan 24 00:44:14.090958 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 24 00:44:14.090976 kernel: Policy zone: Normal
Jan 24 00:44:14.090995 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:44:14.091013 kernel: software IO TLB: area num 2.
Jan 24 00:44:14.091032 kernel: Memory: 7513176K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 347148K reserved, 0K cma-reserved)
Jan 24 00:44:14.091056 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 24 00:44:14.091073 kernel: Kernel/User page tables isolation: enabled
Jan 24 00:44:14.091092 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:44:14.091110 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:44:14.091128 kernel: Dynamic Preempt: voluntary
Jan 24 00:44:14.091147 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:44:14.091167 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:44:14.091185 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 24 00:44:14.091223 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:44:14.091248 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:44:14.091268 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:44:14.091291 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:44:14.091329 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 24 00:44:14.091346 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 24 00:44:14.091363 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:44:14.091383 kernel: Console: colour dummy device 80x25
Jan 24 00:44:14.091405 kernel: printk: console [ttyS0] enabled
Jan 24 00:44:14.091423 kernel: ACPI: Core revision 20230628
Jan 24 00:44:14.091441 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:44:14.091458 kernel: x2apic enabled
Jan 24 00:44:14.091476 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:44:14.091492 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 24 00:44:14.091512 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 24 00:44:14.091530 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 24 00:44:14.091548 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 24 00:44:14.091573 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 24 00:44:14.091593 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:44:14.091620 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 24 00:44:14.091639 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 24 00:44:14.091659 kernel: Spectre V2 : Mitigation: IBRS
Jan 24 00:44:14.091678 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:44:14.091698 kernel: RETBleed: Mitigation: IBRS
Jan 24 00:44:14.091714 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 24 00:44:14.091732 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 24 00:44:14.091753 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 24 00:44:14.091770 kernel: MDS: Mitigation: Clear CPU buffers
Jan 24 00:44:14.091791 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:44:14.091809 kernel: active return thunk: its_return_thunk
Jan 24 00:44:14.091827 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 24 00:44:14.091847 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:44:14.091866 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:44:14.091886 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:44:14.091906 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:44:14.091930 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 24 00:44:14.091951 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:44:14.091970 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:44:14.091990 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:44:14.092010 kernel: landlock: Up and running.
Jan 24 00:44:14.092030 kernel: SELinux: Initializing.
Jan 24 00:44:14.092050 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 24 00:44:14.092070 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 24 00:44:14.092087 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 24 00:44:14.092111 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:44:14.092130 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:44:14.092149 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:44:14.092168 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 24 00:44:14.092185 kernel: signal: max sigframe size: 1776
Jan 24 00:44:14.092202 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:44:14.092222 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:44:14.092239 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:44:14.092256 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:44:14.092299 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:44:14.092332 kernel: .... node #0, CPUs: #1
Jan 24 00:44:14.092350 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 24 00:44:14.092369 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 24 00:44:14.092388 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 00:44:14.092407 kernel: smpboot: Max logical packages: 1
Jan 24 00:44:14.092425 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 24 00:44:14.092444 kernel: devtmpfs: initialized
Jan 24 00:44:14.092469 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:44:14.092488 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 24 00:44:14.092506 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:44:14.092525 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 24 00:44:14.092543 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:44:14.092561 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:44:14.092579 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:44:14.092608 kernel: audit: type=2000 audit(1769215452.738:1): state=initialized audit_enabled=0 res=1
Jan 24 00:44:14.092627 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:44:14.092649 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:44:14.092667 kernel: cpuidle: using governor menu
Jan 24 00:44:14.092685 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:44:14.092703 kernel: dca service started, version 1.12.1
Jan 24 00:44:14.092722 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:44:14.092740 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:44:14.092758 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:44:14.092777 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:44:14.092795 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:44:14.092818 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:44:14.092836 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:44:14.092854 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:44:14.092872 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:44:14.092890 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 24 00:44:14.092908 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:44:14.092926 kernel: ACPI: Interpreter enabled
Jan 24 00:44:14.092942 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 24 00:44:14.092961 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:44:14.092983 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:44:14.093001 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 24 00:44:14.093020 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 24 00:44:14.093038 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:44:14.093300 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:44:14.093543 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 24 00:44:14.093741 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 24 00:44:14.093772 kernel: PCI host bridge to bus 0000:00
Jan 24 00:44:14.093954 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:44:14.094127 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:44:14.094298 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:44:14.094488 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 24 00:44:14.094663 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:44:14.094871 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 24 00:44:14.095098 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 24 00:44:14.095351 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 24 00:44:14.095547 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 24 00:44:14.095754 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 24 00:44:14.095936 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 24 00:44:14.096118 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 24 00:44:14.096336 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 24 00:44:14.096524 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 24 00:44:14.096712 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 24 00:44:14.096900 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 24 00:44:14.097080 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 24 00:44:14.097260 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 24 00:44:14.097283 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:44:14.097308 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:44:14.097341 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:44:14.097360 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:44:14.097378 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 24 00:44:14.097397 kernel: iommu: Default domain type: Translated
Jan 24 00:44:14.097416 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:44:14.097434 kernel: efivars: Registered efivars operations
Jan 24 00:44:14.097452 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:44:14.097471 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:44:14.097494 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 24 00:44:14.097512 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 24 00:44:14.097530 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 24 00:44:14.097548 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 24 00:44:14.097566 kernel: vgaarb: loaded
Jan 24 00:44:14.097585 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:44:14.097610 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:44:14.097629 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:44:14.097647 kernel: pnp: PnP ACPI init
Jan 24 00:44:14.097670 kernel: pnp: PnP ACPI: found 7 devices
Jan 24 00:44:14.097689 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:44:14.097708 kernel: NET: Registered PF_INET protocol family
Jan 24 00:44:14.097727 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 24 00:44:14.097745 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 24 00:44:14.097764 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:44:14.097782 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:44:14.097801 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 24 00:44:14.097819 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 24 00:44:14.097841 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 24 00:44:14.097860 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 24 00:44:14.097879 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:44:14.097897 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:44:14.098070 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:44:14.098234 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:44:14.098412 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:44:14.098575 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 24 00:44:14.098772 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 24 00:44:14.098796 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:44:14.098815 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 24 00:44:14.098834 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 24 00:44:14.098852 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 24 00:44:14.098871 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 24 00:44:14.098890 kernel: clocksource: Switched to clocksource tsc
Jan 24 00:44:14.098908 kernel: Initialise system trusted keyrings
Jan 24 00:44:14.098932 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 24 00:44:14.098950 kernel: Key type asymmetric registered
Jan 24 00:44:14.098969 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:44:14.098987 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:44:14.099005 kernel: io scheduler mq-deadline registered
Jan 24 00:44:14.099024 kernel: io scheduler kyber registered
Jan 24 00:44:14.099042 kernel: io scheduler bfq registered
Jan 24 00:44:14.099060 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:44:14.099079 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 24 00:44:14.099266 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 24 00:44:14.099289 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 24 00:44:14.099484 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 24 00:44:14.099508 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 24 00:44:14.099718 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 24 00:44:14.099744 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:44:14.099765 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:44:14.099784 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 24 00:44:14.099803 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 24 00:44:14.099828 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 24 00:44:14.100031 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 24 00:44:14.100058 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:44:14.100079 kernel: i8042: Warning: Keylock active
Jan 24 00:44:14.100098 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:44:14.100118 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:44:14.100333 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 24 00:44:14.100525 kernel: rtc_cmos 00:00: registered as rtc0
Jan 24 00:44:14.100713 kernel: rtc_cmos 00:00: setting system clock to 2026-01-24T00:44:13 UTC (1769215453)
Jan 24 00:44:14.100887 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 24 00:44:14.100911 kernel: intel_pstate: CPU model not supported
Jan 24 00:44:14.100931 kernel: pstore: Using crash dump compression: deflate
Jan 24 00:44:14.100950 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 24 00:44:14.100968 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:44:14.100986 kernel: Segment Routing with IPv6
Jan 24 00:44:14.101005 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:44:14.101030 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:44:14.101049 kernel: Key type dns_resolver registered
Jan 24 00:44:14.101068 kernel: IPI shorthand broadcast: enabled
Jan 24 00:44:14.101087 kernel: sched_clock: Marking stable (826030131, 130763003)->(971364333, -14571199)
Jan 24 00:44:14.101106 kernel: registered taskstats version 1
Jan 24 00:44:14.101124 kernel: Loading compiled-in X.509 certificates
Jan 24 00:44:14.101142 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:44:14.101162 kernel: Key type .fscrypt registered
Jan 24 00:44:14.101180 kernel: Key type fscrypt-provisioning registered
Jan 24 00:44:14.101202 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:44:14.101221 kernel: ima: No architecture policies found
Jan 24 00:44:14.101239 kernel: clk: Disabling unused clocks
Jan 24 00:44:14.101258 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:44:14.101277 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:44:14.101296 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:44:14.101342 kernel: Run /init as init process
Jan 24 00:44:14.101360 kernel: with arguments:
Jan 24 00:44:14.101379 kernel: /init
Jan 24 00:44:14.101401 kernel: with environment:
Jan 24 00:44:14.101417 kernel: HOME=/
Jan 24 00:44:14.101435 kernel: TERM=linux
Jan 24 00:44:14.101455 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:44:14.101477 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:44:14.101500 systemd[1]: Detected virtualization google.
Jan 24 00:44:14.101520 systemd[1]: Detected architecture x86-64.
Jan 24 00:44:14.101544 systemd[1]: Running in initrd.
Jan 24 00:44:14.101563 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:44:14.101581 systemd[1]: Hostname set to <localhost>.
Jan 24 00:44:14.101610 systemd[1]: Initializing machine ID from random generator.
Jan 24 00:44:14.101629 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:44:14.101648 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:44:14.101668 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:44:14.101688 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:44:14.101709 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:44:14.101727 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:44:14.101746 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:44:14.101769 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:44:14.101789 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:44:14.101808 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:44:14.101829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:44:14.101853 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:44:14.101874 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:44:14.101913 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:44:14.101937 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:44:14.101957 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:44:14.101977 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:44:14.102001 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:44:14.102022 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:44:14.102042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:44:14.102063 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:44:14.102083 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:44:14.102104 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:44:14.102124 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:44:14.102144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:44:14.102164 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:44:14.102188 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:44:14.102209 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:44:14.102229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:44:14.102249 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:44:14.102305 systemd-journald[184]: Collecting audit messages is disabled.
Jan 24 00:44:14.102482 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:44:14.102502 systemd-journald[184]: Journal started
Jan 24 00:44:14.102539 systemd-journald[184]: Runtime Journal (/run/log/journal/609e680a874745d18ce42a8cf81fd8ba) is 8.0M, max 148.7M, 140.7M free.
Jan 24 00:44:14.105426 systemd-modules-load[185]: Inserted module 'overlay'
Jan 24 00:44:14.110574 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:44:14.117927 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:44:14.121823 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:44:14.141564 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:44:14.154507 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:44:14.173636 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:44:14.173668 kernel: Bridge firewalling registered
Jan 24 00:44:14.160606 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 24 00:44:14.161911 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:44:14.163815 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:44:14.170032 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:44:14.177660 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:44:14.193972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:44:14.212977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:44:14.216524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:44:14.237664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:44:14.245420 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:44:14.250790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:44:14.264523 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:44:14.274589 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:44:14.294563 dracut-cmdline[217]: dracut-dracut-053
Jan 24 00:44:14.299216 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:44:14.332632 systemd-resolved[218]: Positive Trust Anchors:
Jan 24 00:44:14.333189 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:44:14.333418 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:44:14.340177 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jan 24 00:44:14.343275 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:44:14.356551 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:44:14.406361 kernel: SCSI subsystem initialized
Jan 24 00:44:14.418364 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:44:14.430353 kernel: iscsi: registered transport (tcp)
Jan 24 00:44:14.454503 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:44:14.454594 kernel: QLogic iSCSI HBA Driver
Jan 24 00:44:14.508239 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:44:14.518586 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:44:14.554665 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:44:14.554756 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:44:14.554785 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:44:14.601376 kernel: raid6: avx2x4 gen() 18184 MB/s
Jan 24 00:44:14.618361 kernel: raid6: avx2x2 gen() 18146 MB/s
Jan 24 00:44:14.635713 kernel: raid6: avx2x1 gen() 14213 MB/s
Jan 24 00:44:14.635753 kernel: raid6: using algorithm avx2x4 gen() 18184 MB/s
Jan 24 00:44:14.653790 kernel: raid6: .... xor() 7840 MB/s, rmw enabled
Jan 24 00:44:14.653844 kernel: raid6: using avx2x2 recovery algorithm
Jan 24 00:44:14.676355 kernel: xor: automatically using best checksumming function avx
Jan 24 00:44:14.850363 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:44:14.863478 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:44:14.878513 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:44:14.895046 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 24 00:44:14.902273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:44:14.909674 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:44:14.941916 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 24 00:44:14.980231 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:44:14.985528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:44:15.077042 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:44:15.091599 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:44:15.132788 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:44:15.143247 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:44:15.152438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:44:15.156748 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:44:15.174620 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:44:15.195352 kernel: scsi host0: Virtio SCSI HBA
Jan 24 00:44:15.195462 kernel: blk-mq: reduced tag depth to 10240
Jan 24 00:44:15.206266 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 24 00:44:15.209378 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:44:15.229637 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:44:15.292921 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:44:15.292994 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:44:15.300827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:44:15.301275 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:44:15.310652 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:44:15.332100 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Jan 24 00:44:15.332484 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 24 00:44:15.332745 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 24 00:44:15.332986 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 24 00:44:15.333232 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 24 00:44:15.314660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:44:15.314993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:44:15.319508 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:44:15.346517 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:44:15.346558 kernel: GPT:17805311 != 33554431
Jan 24 00:44:15.346583 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:44:15.346621 kernel: GPT:17805311 != 33554431
Jan 24 00:44:15.346643 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:44:15.346671 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:44:15.337462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:44:15.349483 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 24 00:44:15.376597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:44:15.382490 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:44:15.422357 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (461)
Jan 24 00:44:15.427708 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (456)
Jan 24 00:44:15.439962 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:44:15.459406 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 24 00:44:15.467396 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 24 00:44:15.475371 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 24 00:44:15.482152 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 24 00:44:15.482436 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 24 00:44:15.493536 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:44:15.518067 disk-uuid[551]: Primary Header is updated.
Jan 24 00:44:15.518067 disk-uuid[551]: Secondary Entries is updated.
Jan 24 00:44:15.518067 disk-uuid[551]: Secondary Header is updated.
Jan 24 00:44:15.529537 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:44:15.550342 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:44:15.562340 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:44:16.562354 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:44:16.563993 disk-uuid[552]: The operation has completed successfully.
Jan 24 00:44:16.642191 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:44:16.642355 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:44:16.671556 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:44:16.693654 sh[569]: Success
Jan 24 00:44:16.715343 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 24 00:44:16.801506 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:44:16.808935 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:44:16.836922 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:44:16.887866 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:44:16.887972 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:44:16.888000 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:44:16.897301 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:44:16.909826 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:44:16.941366 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 24 00:44:16.947482 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:44:16.948488 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:44:16.952520 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:44:17.030496 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:44:17.030542 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:44:17.030570 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:44:17.030594 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 24 00:44:17.030620 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:44:17.025676 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:44:17.052819 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:44:17.067195 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:44:17.074678 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:44:17.171298 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:44:17.218756 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:44:17.279761 ignition[669]: Ignition 2.19.0
Jan 24 00:44:17.280233 ignition[669]: Stage: fetch-offline
Jan 24 00:44:17.283013 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:44:17.280342 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:44:17.290689 systemd-networkd[752]: lo: Link UP
Jan 24 00:44:17.280358 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 24 00:44:17.290695 systemd-networkd[752]: lo: Gained carrier
Jan 24 00:44:17.280529 ignition[669]: parsed url from cmdline: ""
Jan 24 00:44:17.292341 systemd-networkd[752]: Enumeration completed
Jan 24 00:44:17.280537 ignition[669]: no config URL provided
Jan 24 00:44:17.292922 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:44:17.280546 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:44:17.292929 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:44:17.280560 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:44:17.295278 systemd-networkd[752]: eth0: Link UP
Jan 24 00:44:17.280571 ignition[669]: failed to fetch config: resource requires networking
Jan 24 00:44:17.295283 systemd-networkd[752]: eth0: Gained carrier
Jan 24 00:44:17.280863 ignition[669]: Ignition finished successfully
Jan 24 00:44:17.295293 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:44:17.395832 ignition[760]: Ignition 2.19.0
Jan 24 00:44:17.304096 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:44:17.395845 ignition[760]: Stage: fetch
Jan 24 00:44:17.312402 systemd-networkd[752]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58'
Jan 24 00:44:17.396090 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:44:17.312418 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.29/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 24 00:44:17.396103 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 24 00:44:17.331755 systemd[1]: Reached target network.target - Network.
Jan 24 00:44:17.396245 ignition[760]: parsed url from cmdline: ""
Jan 24 00:44:17.352551 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 24 00:44:17.396250 ignition[760]: no config URL provided
Jan 24 00:44:17.410375 unknown[760]: fetched base config from "system"
Jan 24 00:44:17.396257 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:44:17.410391 unknown[760]: fetched base config from "system"
Jan 24 00:44:17.396268 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:44:17.410428 unknown[760]: fetched user config from "gcp"
Jan 24 00:44:17.396292 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 24 00:44:17.412981 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 24 00:44:17.400464 ignition[760]: GET result: OK
Jan 24 00:44:17.431516 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:44:17.400550 ignition[760]: parsing config with SHA512: a6d82098504671f0c69b463c91ed9c5f7889d3dacb2cabe69fa5dc18d59e68e608ed33f21786d73e8409ad7740e518d1e986605f4885c583001df15da943f257
Jan 24 00:44:17.457849 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:44:17.411048 ignition[760]: fetch: fetch complete
Jan 24 00:44:17.476521 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:44:17.411054 ignition[760]: fetch: fetch passed
Jan 24 00:44:17.517796 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:44:17.411108 ignition[760]: Ignition finished successfully
Jan 24 00:44:17.534185 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:44:17.455286 ignition[767]: Ignition 2.19.0
Jan 24 00:44:17.554489 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:44:17.455295 ignition[767]: Stage: kargs
Jan 24 00:44:17.569501 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:44:17.455537 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:44:17.601525 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:44:17.455550 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 24 00:44:17.618514 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:44:17.456561 ignition[767]: kargs: kargs passed
Jan 24 00:44:17.645536 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:44:17.456617 ignition[767]: Ignition finished successfully
Jan 24 00:44:17.503903 ignition[772]: Ignition 2.19.0
Jan 24 00:44:17.503912 ignition[772]: Stage: disks
Jan 24 00:44:17.504126 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:44:17.504139 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 24 00:44:17.505515 ignition[772]: disks: disks passed
Jan 24 00:44:17.505600 ignition[772]: Ignition finished successfully
Jan 24 00:44:17.690191 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 24 00:44:17.894451 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:44:17.927504 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:44:18.049362 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:44:18.049944 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:44:18.050851 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:44:18.085469 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:44:18.101453 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:44:18.121035 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 00:44:18.192487 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789)
Jan 24 00:44:18.192545 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:44:18.192573 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:44:18.192596 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:44:18.192621 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 24 00:44:18.192648 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:44:18.121124 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:44:18.121161 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:44:18.165253 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:44:18.202386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:44:18.224552 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:44:18.371601 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:44:18.383471 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:44:18.393455 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:44:18.403455 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:44:18.537884 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:44:18.543449 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:44:18.563621 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:44:18.592738 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:44:18.609482 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:44:18.636000 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:44:18.645791 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 00:44:18.671604 ignition[905]: INFO : Ignition 2.19.0
Jan 24 00:44:18.671604 ignition[905]: INFO : Stage: mount
Jan 24 00:44:18.671604 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:44:18.671604 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 24 00:44:18.671604 ignition[905]: INFO : mount: mount passed
Jan 24 00:44:18.671604 ignition[905]: INFO : Ignition finished successfully
Jan 24 00:44:18.670445 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 00:44:19.055568 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:44:19.089543 systemd-networkd[752]: eth0: Gained IPv6LL
Jan 24 00:44:19.126171 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (918)
Jan 24 00:44:19.126214 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:44:19.126239 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:44:19.126264 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:44:19.141691 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 24 00:44:19.141767 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:44:19.144948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:44:19.183448 ignition[935]: INFO : Ignition 2.19.0
Jan 24 00:44:19.183448 ignition[935]: INFO : Stage: files
Jan 24 00:44:19.197476 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:44:19.197476 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 24 00:44:19.197476 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 00:44:19.197476 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 00:44:19.197476 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 00:44:19.197476 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 00:44:19.197476 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 00:44:19.197476 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 00:44:19.197476 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 24 00:44:19.197476 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 24 00:44:19.194627 unknown[935]: wrote ssh authorized keys file for user: core
Jan 24 00:44:19.333447 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 24 00:44:19.442088 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 24 00:44:19.837477 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 24 00:44:20.479613 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 00:44:20.479613 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 24 00:44:20.498673 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:44:20.498673 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:44:20.498673 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 24 00:44:20.498673 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 24 00:44:20.498673 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 24 00:44:20.498673 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:44:20.498673 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:44:20.498673 ignition[935]: INFO : files: files passed
Jan 24 00:44:20.498673 ignition[935]: INFO : Ignition finished successfully
Jan 24 00:44:20.484467 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 00:44:20.524597 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 00:44:20.574555 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 00:44:20.584017 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 00:44:20.731490 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:44:20.731490 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:44:20.584145 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 00:44:20.769519 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:44:20.655659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:44:20.670663 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 00:44:20.692558 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 00:44:20.766898 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 00:44:20.767024 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 00:44:20.780723 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 00:44:20.794723 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 00:44:20.824702 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 00:44:20.829536 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 00:44:20.899365 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:44:20.927538 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 00:44:20.959037 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:44:20.977762 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:44:20.987834 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 00:44:21.007802 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 00:44:21.008004 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:44:21.040836 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 00:44:21.051837 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 00:44:21.068777 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 00:44:21.083779 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:44:21.101794 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 00:44:21.120797 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 00:44:21.138758 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:44:21.155772 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 00:44:21.176759 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 00:44:21.193783 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 00:44:21.224610 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 00:44:21.225016 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
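With ignition-complete.target reached above, every change Ignition made (the core user, the files, the kubernetes sysext link, the prepare-helm.service unit) came from the instance's declarative config, which the log itself does not reproduce. Purely as an illustrative sketch, not the config actually used on this instance, the following Python assembles a spec-3-style Ignition document that would drive the same files-stage operations; the paths and URLs are copied from the op(3) through op(e) entries, while the spec version, key material, and file contents are placeholders:

    import json

    # Hypothetical reconstruction of the shape of the Ignition config behind
    # the files stage logged above; contents and versions are assumptions.
    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["<key elided>"],  # placeholder only
        }]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
                {"path": "/home/core/install.sh"},
                {"path": "/home/core/nginx.yaml"},
                {"path": "/home/core/nfs-pod.yaml"},
                {"path": "/home/core/nfs-pvc.yaml"},
                {"path": "/etc/flatcar/update.conf"},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            # Written to /etc/systemd/system and preset to enabled,
            # matching ops (b), (c) and (d) in the log.
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "<unit file elided>"},
        ]},
    }

    print(json.dumps(config, indent=2))

Feeding a document of this shape to the instance as its Ignition config would produce an op(1) through op(e) sequence like the one recorded by ignition[935].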
Jan 24 00:44:21.251728 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:44:21.252137 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:44:21.269754 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 00:44:21.269926 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:44:21.288763 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 00:44:21.288950 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:44:21.327742 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 00:44:21.327952 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:44:21.335833 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 00:44:21.336009 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 00:44:21.361680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 00:44:21.412601 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 00:44:21.433495 ignition[988]: INFO : Ignition 2.19.0
Jan 24 00:44:21.433495 ignition[988]: INFO : Stage: umount
Jan 24 00:44:21.433495 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:44:21.433495 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 24 00:44:21.433495 ignition[988]: INFO : umount: umount passed
Jan 24 00:44:21.433495 ignition[988]: INFO : Ignition finished successfully
Jan 24 00:44:21.418639 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 00:44:21.418842 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:44:21.447962 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 00:44:21.448193 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:44:21.492251 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 00:44:21.493429 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 00:44:21.493549 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 00:44:21.497121 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 00:44:21.497231 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 00:44:21.514276 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 00:44:21.514443 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 00:44:21.540350 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 00:44:21.540495 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 00:44:21.559656 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 00:44:21.559720 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 00:44:21.579677 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 24 00:44:21.579745 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 24 00:44:21.589713 systemd[1]: Stopped target network.target - Network.
Jan 24 00:44:21.604694 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 00:44:21.604777 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:44:21.619733 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 00:44:21.637662 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 00:44:21.641451 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:44:21.670481 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 00:44:21.686571 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 00:44:21.697729 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 00:44:21.697808 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:44:21.712695 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 00:44:21.712774 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:44:21.729726 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 00:44:21.729799 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 00:44:21.746726 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 00:44:21.746795 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 00:44:21.763710 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 00:44:21.763779 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 00:44:21.780920 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 00:44:21.785381 systemd-networkd[752]: eth0: DHCPv6 lease lost
Jan 24 00:44:21.808618 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 00:44:21.828067 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 00:44:21.828208 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 00:44:21.839267 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 00:44:21.839616 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 00:44:21.867659 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 00:44:21.867736 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:44:21.880428 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 00:44:21.892637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 00:44:21.892719 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:44:21.920682 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 00:44:21.920746 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:44:21.938715 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 00:44:21.938776 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:44:21.965671 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 00:44:21.965742 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:44:21.984791 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:44:22.012097 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 00:44:22.012283 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:44:22.045613 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 00:44:22.045687 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:44:22.058704 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 00:44:22.451425 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 24 00:44:22.058751 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:44:22.085640 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 00:44:22.085724 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:44:22.115653 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 00:44:22.115884 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:44:22.140707 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:44:22.140789 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:44:22.183587 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 00:44:22.214436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 00:44:22.214555 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:44:22.232572 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 24 00:44:22.232666 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:44:22.253555 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 00:44:22.253649 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:44:22.275564 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:44:22.275660 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:44:22.296103 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 00:44:22.296235 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 00:44:22.315880 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 00:44:22.316000 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 00:44:22.337755 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 00:44:22.361549 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 00:44:22.405691 systemd[1]: Switching root.
Jan 24 00:44:22.678443 systemd-journald[184]: Journal stopped
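Once the machine is up, entries like the ones above can be recovered from the journal programmatically rather than scraped from the serial console. A minimal Python sketch using the python-systemd bindings (assuming the python-systemd package is installed and the journal files are readable) that filters for the ignition[...] messages seen in this log:

    from systemd import journal  # python-systemd bindings

    # Iterate over local journal entries, keeping only Ignition's output,
    # i.e. the ignition[...] lines shown in this log.
    reader = journal.Reader()
    reader.add_match(SYSLOG_IDENTIFIER="ignition")
    for entry in reader:
        ts = entry["__REALTIME_TIMESTAMP"]  # a datetime.datetime
        print(ts.strftime("%b %d %H:%M:%S.%f"), entry["MESSAGE"])

The same Reader can be pointed at other units (for example SYSLOG_IDENTIFIER="systemd-fsck") to pull out any of the per-service streams interleaved above.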
Jan 24 00:44:14.089371 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025 Jan 24 00:44:14.089387 kernel: Hypervisor detected: KVM Jan 24 00:44:14.089408 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:44:14.089425 kernel: kvm-clock: using sched offset of 13338932652 cycles Jan 24 00:44:14.089442 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:44:14.089459 kernel: tsc: Detected 2299.998 MHz processor Jan 24 00:44:14.089475 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:44:14.089493 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:44:14.089510 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 24 00:44:14.089528 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 24 00:44:14.089545 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:44:14.089566 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 24 00:44:14.089582 kernel: Using GB pages for direct mapping Jan 24 00:44:14.089606 kernel: Secure boot disabled Jan 24 00:44:14.089623 kernel: ACPI: Early table checksum verification disabled Jan 24 00:44:14.089639 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 24 00:44:14.089657 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 24 00:44:14.089675 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 24 00:44:14.089700 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 24 00:44:14.089722 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 24 00:44:14.089740 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Jan 24 00:44:14.089759 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 24 00:44:14.089778 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 24 00:44:14.089796 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 24 00:44:14.089814 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 24 00:44:14.089837 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 24 00:44:14.089855 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 24 00:44:14.089874 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 24 00:44:14.089892 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 24 00:44:14.089911 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 24 00:44:14.089929 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 24 00:44:14.089947 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 24 00:44:14.089965 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 24 00:44:14.089983 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 24 00:44:14.090005 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 24 00:44:14.090023 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 24 00:44:14.090042 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 24 00:44:14.090060 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 24 00:44:14.090078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Jan 24 00:44:14.090096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 24 00:44:14.090115 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 24 00:44:14.090133 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 24 00:44:14.090151 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 24 00:44:14.090174 kernel: Zone ranges: Jan 24 00:44:14.090193 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:44:14.090211 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 24 00:44:14.090230 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 24 00:44:14.090248 kernel: Movable zone start for each node Jan 24 00:44:14.090266 kernel: Early memory node ranges Jan 24 00:44:14.090285 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 24 00:44:14.090303 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 24 00:44:14.090346 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 24 00:44:14.090369 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 24 00:44:14.090387 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 24 00:44:14.090405 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 24 00:44:14.090424 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:44:14.090442 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 24 00:44:14.090460 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 24 00:44:14.090479 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 24 00:44:14.090496 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 24 00:44:14.090515 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 24 00:44:14.090533 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:44:14.090556 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:44:14.090575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:44:14.090593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:44:14.090617 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:44:14.090636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:44:14.090654 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:44:14.090673 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:44:14.090691 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 24 00:44:14.090713 kernel: Booting paravirtualized kernel on KVM Jan 24 00:44:14.090732 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:44:14.090751 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:44:14.090769 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:44:14.090788 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:44:14.090806 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:44:14.090824 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:44:14.090842 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:44:14.090862 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:44:14.090886 kernel: random: crng init done Jan 24 00:44:14.090903 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 24 00:44:14.090922 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:44:14.090941 kernel: Fallback order for Node 0: 0 Jan 24 00:44:14.090958 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 24 00:44:14.090976 kernel: Policy zone: Normal Jan 24 00:44:14.090995 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:44:14.091013 kernel: software IO TLB: area num 2. Jan 24 00:44:14.091032 kernel: Memory: 7513176K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 347148K reserved, 0K cma-reserved) Jan 24 00:44:14.091056 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:44:14.091073 kernel: Kernel/User page tables isolation: enabled Jan 24 00:44:14.091092 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:44:14.091110 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:44:14.091128 kernel: Dynamic Preempt: voluntary Jan 24 00:44:14.091147 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:44:14.091167 kernel: rcu: RCU event tracing is enabled. Jan 24 00:44:14.091185 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:44:14.091223 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:44:14.091248 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:44:14.091268 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:44:14.091291 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:44:14.091329 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:44:14.091346 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 24 00:44:14.091363 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:44:14.091383 kernel: Console: colour dummy device 80x25 Jan 24 00:44:14.091405 kernel: printk: console [ttyS0] enabled Jan 24 00:44:14.091423 kernel: ACPI: Core revision 20230628 Jan 24 00:44:14.091441 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:44:14.091458 kernel: x2apic enabled Jan 24 00:44:14.091476 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:44:14.091492 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 24 00:44:14.091512 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 24 00:44:14.091530 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 24 00:44:14.091548 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 24 00:44:14.091573 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 24 00:44:14.091593 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:44:14.091620 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 24 00:44:14.091639 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 24 00:44:14.091659 kernel: Spectre V2 : Mitigation: IBRS Jan 24 00:44:14.091678 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:44:14.091698 kernel: RETBleed: Mitigation: IBRS Jan 24 00:44:14.091714 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 24 00:44:14.091732 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 24 00:44:14.091753 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 24 00:44:14.091770 kernel: MDS: Mitigation: Clear CPU buffers Jan 24 00:44:14.091791 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:44:14.091809 kernel: active return thunk: its_return_thunk Jan 24 00:44:14.091827 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 24 00:44:14.091847 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:44:14.091866 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:44:14.091886 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:44:14.091906 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:44:14.091930 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 24 00:44:14.091951 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:44:14.091970 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:44:14.091990 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:44:14.092010 kernel: landlock: Up and running. Jan 24 00:44:14.092030 kernel: SELinux: Initializing. Jan 24 00:44:14.092050 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:44:14.092070 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:44:14.092087 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 24 00:44:14.092111 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:44:14.092130 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:44:14.092149 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:44:14.092168 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 24 00:44:14.092185 kernel: signal: max sigframe size: 1776 Jan 24 00:44:14.092202 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:44:14.092222 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:44:14.092239 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:44:14.092256 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:44:14.092299 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:44:14.092332 kernel: .... 
node #0, CPUs: #1 Jan 24 00:44:14.092350 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 24 00:44:14.092369 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 24 00:44:14.092388 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:44:14.092407 kernel: smpboot: Max logical packages: 1 Jan 24 00:44:14.092425 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 24 00:44:14.092444 kernel: devtmpfs: initialized Jan 24 00:44:14.092469 kernel: x86/mm: Memory block size: 128MB Jan 24 00:44:14.092488 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 24 00:44:14.092506 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:44:14.092525 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:44:14.092543 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:44:14.092561 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:44:14.092579 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:44:14.092608 kernel: audit: type=2000 audit(1769215452.738:1): state=initialized audit_enabled=0 res=1 Jan 24 00:44:14.092627 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:44:14.092649 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:44:14.092667 kernel: cpuidle: using governor menu Jan 24 00:44:14.092685 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:44:14.092703 kernel: dca service started, version 1.12.1 Jan 24 00:44:14.092722 kernel: PCI: Using configuration type 1 for base access Jan 24 00:44:14.092740 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 24 00:44:14.092758 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:44:14.092777 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:44:14.092795 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:44:14.092818 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:44:14.092836 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:44:14.092854 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:44:14.092872 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:44:14.092890 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 24 00:44:14.092908 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:44:14.092926 kernel: ACPI: Interpreter enabled Jan 24 00:44:14.092942 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 00:44:14.092961 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:44:14.092983 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:44:14.093001 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 24 00:44:14.093020 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 24 00:44:14.093038 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:44:14.093300 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:44:14.093543 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 24 00:44:14.093741 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 24 00:44:14.093772 kernel: PCI host bridge to bus 0000:00 Jan 24 00:44:14.093954 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:44:14.094127 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:44:14.094298 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:44:14.094488 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 24 00:44:14.094663 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:44:14.094871 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 24 00:44:14.095098 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 24 00:44:14.095351 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 24 00:44:14.095547 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 24 00:44:14.095754 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 24 00:44:14.095936 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 24 00:44:14.096118 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 24 00:44:14.096336 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 24 00:44:14.096524 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 24 00:44:14.096712 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 24 00:44:14.096900 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 24 00:44:14.097080 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 24 00:44:14.097260 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 24 00:44:14.097283 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:44:14.097308 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:44:14.097341 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 
00:44:14.097360 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:44:14.097378 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 24 00:44:14.097397 kernel: iommu: Default domain type: Translated Jan 24 00:44:14.097416 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:44:14.097434 kernel: efivars: Registered efivars operations Jan 24 00:44:14.097452 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:44:14.097471 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:44:14.097494 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 24 00:44:14.097512 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 24 00:44:14.097530 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 24 00:44:14.097548 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 24 00:44:14.097566 kernel: vgaarb: loaded Jan 24 00:44:14.097585 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:44:14.097610 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:44:14.097629 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:44:14.097647 kernel: pnp: PnP ACPI init Jan 24 00:44:14.097670 kernel: pnp: PnP ACPI: found 7 devices Jan 24 00:44:14.097689 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:44:14.097708 kernel: NET: Registered PF_INET protocol family Jan 24 00:44:14.097727 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 24 00:44:14.097745 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 24 00:44:14.097764 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:44:14.097782 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:44:14.097801 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 24 00:44:14.097819 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 24 00:44:14.097841 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 24 00:44:14.097860 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 24 00:44:14.097879 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:44:14.097897 kernel: NET: Registered PF_XDP protocol family Jan 24 00:44:14.098070 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:44:14.098234 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:44:14.098412 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:44:14.098575 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 24 00:44:14.098772 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 24 00:44:14.098796 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:44:14.098815 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:44:14.098834 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 24 00:44:14.098852 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 00:44:14.098871 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 24 00:44:14.098890 kernel: clocksource: Switched to clocksource tsc Jan 24 00:44:14.098908 kernel: Initialise system trusted keyrings Jan 24 00:44:14.098932 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 
Jan 24 00:44:14.098950 kernel: Key type asymmetric registered Jan 24 00:44:14.098969 kernel: Asymmetric key parser 'x509' registered Jan 24 00:44:14.098987 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:44:14.099005 kernel: io scheduler mq-deadline registered Jan 24 00:44:14.099024 kernel: io scheduler kyber registered Jan 24 00:44:14.099042 kernel: io scheduler bfq registered Jan 24 00:44:14.099060 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:44:14.099079 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 24 00:44:14.099266 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 24 00:44:14.099289 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 24 00:44:14.099484 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 24 00:44:14.099508 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 24 00:44:14.099718 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 24 00:44:14.099744 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:44:14.099765 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:44:14.099784 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 24 00:44:14.099803 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 24 00:44:14.099828 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 24 00:44:14.100031 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 24 00:44:14.100058 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:44:14.100079 kernel: i8042: Warning: Keylock active Jan 24 00:44:14.100098 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:44:14.100118 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:44:14.100333 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 24 00:44:14.100525 kernel: rtc_cmos 00:00: registered as rtc0 Jan 24 00:44:14.100713 kernel: rtc_cmos 00:00: setting system clock to 2026-01-24T00:44:13 UTC (1769215453) Jan 24 00:44:14.100887 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 24 00:44:14.100911 kernel: intel_pstate: CPU model not supported Jan 24 00:44:14.100931 kernel: pstore: Using crash dump compression: deflate Jan 24 00:44:14.100950 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:44:14.100968 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:44:14.100986 kernel: Segment Routing with IPv6 Jan 24 00:44:14.101005 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:44:14.101030 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:44:14.101049 kernel: Key type dns_resolver registered Jan 24 00:44:14.101068 kernel: IPI shorthand broadcast: enabled Jan 24 00:44:14.101087 kernel: sched_clock: Marking stable (826030131, 130763003)->(971364333, -14571199) Jan 24 00:44:14.101106 kernel: registered taskstats version 1 Jan 24 00:44:14.101124 kernel: Loading compiled-in X.509 certificates Jan 24 00:44:14.101142 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:44:14.101162 kernel: Key type .fscrypt registered Jan 24 00:44:14.101180 kernel: Key type fscrypt-provisioning registered Jan 24 00:44:14.101202 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:44:14.101221 kernel: ima: No architecture policies found Jan 24 00:44:14.101239 kernel: clk: Disabling unused clocks Jan 24 
00:44:14.101258 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:44:14.101277 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:44:14.101296 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:44:14.101342 kernel: Run /init as init process Jan 24 00:44:14.101360 kernel: with arguments: Jan 24 00:44:14.101379 kernel: /init Jan 24 00:44:14.101401 kernel: with environment: Jan 24 00:44:14.101417 kernel: HOME=/ Jan 24 00:44:14.101435 kernel: TERM=linux Jan 24 00:44:14.101455 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:44:14.101477 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:44:14.101500 systemd[1]: Detected virtualization google. Jan 24 00:44:14.101520 systemd[1]: Detected architecture x86-64. Jan 24 00:44:14.101544 systemd[1]: Running in initrd. Jan 24 00:44:14.101563 systemd[1]: No hostname configured, using default hostname. Jan 24 00:44:14.101581 systemd[1]: Hostname set to . Jan 24 00:44:14.101610 systemd[1]: Initializing machine ID from random generator. Jan 24 00:44:14.101629 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:44:14.101648 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:44:14.101668 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:44:14.101688 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:44:14.101709 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:44:14.101727 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:44:14.101746 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:44:14.101769 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:44:14.101789 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:44:14.101808 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:44:14.101829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:44:14.101853 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:44:14.101874 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:44:14.101913 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:44:14.101937 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:44:14.101957 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:44:14.101977 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:44:14.102001 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:44:14.102022 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:44:14.102042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 24 00:44:14.102063 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:44:14.102083 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:44:14.102104 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:44:14.102124 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:44:14.102144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:44:14.102164 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:44:14.102188 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:44:14.102209 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:44:14.102229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:44:14.102249 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:44:14.102305 systemd-journald[184]: Collecting audit messages is disabled. Jan 24 00:44:14.102482 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:44:14.102502 systemd-journald[184]: Journal started Jan 24 00:44:14.102539 systemd-journald[184]: Runtime Journal (/run/log/journal/609e680a874745d18ce42a8cf81fd8ba) is 8.0M, max 148.7M, 140.7M free. Jan 24 00:44:14.105426 systemd-modules-load[185]: Inserted module 'overlay' Jan 24 00:44:14.110574 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:44:14.117927 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:44:14.121823 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:44:14.141564 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:44:14.154507 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:44:14.173636 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:44:14.173668 kernel: Bridge firewalling registered Jan 24 00:44:14.160606 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 24 00:44:14.161911 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:44:14.163815 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:44:14.170032 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:44:14.177660 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:44:14.193972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:44:14.212977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:44:14.216524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:44:14.237664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:44:14.245420 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:44:14.250790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:44:14.264523 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:44:14.274589 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 24 00:44:14.294563 dracut-cmdline[217]: dracut-dracut-053 Jan 24 00:44:14.299216 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:44:14.332632 systemd-resolved[218]: Positive Trust Anchors: Jan 24 00:44:14.333189 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:44:14.333418 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:44:14.340177 systemd-resolved[218]: Defaulting to hostname 'linux'. Jan 24 00:44:14.343275 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:44:14.356551 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:44:14.406361 kernel: SCSI subsystem initialized Jan 24 00:44:14.418364 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:44:14.430353 kernel: iscsi: registered transport (tcp) Jan 24 00:44:14.454503 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:44:14.454594 kernel: QLogic iSCSI HBA Driver Jan 24 00:44:14.508239 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:44:14.518586 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:44:14.554665 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:44:14.554756 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:44:14.554785 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:44:14.601376 kernel: raid6: avx2x4 gen() 18184 MB/s Jan 24 00:44:14.618361 kernel: raid6: avx2x2 gen() 18146 MB/s Jan 24 00:44:14.635713 kernel: raid6: avx2x1 gen() 14213 MB/s Jan 24 00:44:14.635753 kernel: raid6: using algorithm avx2x4 gen() 18184 MB/s Jan 24 00:44:14.653790 kernel: raid6: .... xor() 7840 MB/s, rmw enabled Jan 24 00:44:14.653844 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:44:14.676355 kernel: xor: automatically using best checksumming function avx Jan 24 00:44:14.850363 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:44:14.863478 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:44:14.878513 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:44:14.895046 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 24 00:44:14.902273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:44:14.909674 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 24 00:44:14.941916 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 24 00:44:14.980231 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:44:14.985528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:44:15.077042 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:44:15.091599 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:44:15.132788 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:44:15.143247 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:44:15.152438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:44:15.156748 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:44:15.174620 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:44:15.195352 kernel: scsi host0: Virtio SCSI HBA Jan 24 00:44:15.195462 kernel: blk-mq: reduced tag depth to 10240 Jan 24 00:44:15.206266 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 24 00:44:15.209378 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:44:15.229637 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:44:15.292921 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:44:15.292994 kernel: AES CTR mode by8 optimization enabled Jan 24 00:44:15.300827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:44:15.301275 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:44:15.310652 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:44:15.332100 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Jan 24 00:44:15.332484 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 24 00:44:15.332745 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 24 00:44:15.332986 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 24 00:44:15.333232 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 24 00:44:15.314660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:44:15.314993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:44:15.319508 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:44:15.346517 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:44:15.346558 kernel: GPT:17805311 != 33554431 Jan 24 00:44:15.346583 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:44:15.346621 kernel: GPT:17805311 != 33554431 Jan 24 00:44:15.346643 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:44:15.346671 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:44:15.337462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:44:15.349483 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 24 00:44:15.376597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:44:15.382490 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 24 00:44:15.422357 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (461) Jan 24 00:44:15.427708 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (456) Jan 24 00:44:15.439962 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:44:15.459406 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 24 00:44:15.467396 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 24 00:44:15.475371 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 24 00:44:15.482152 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 24 00:44:15.482436 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 24 00:44:15.493536 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:44:15.518067 disk-uuid[551]: Primary Header is updated. Jan 24 00:44:15.518067 disk-uuid[551]: Secondary Entries is updated. Jan 24 00:44:15.518067 disk-uuid[551]: Secondary Header is updated. Jan 24 00:44:15.529537 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:44:15.550342 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:44:15.562340 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:44:16.562354 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:44:16.563993 disk-uuid[552]: The operation has completed successfully. Jan 24 00:44:16.642191 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:44:16.642355 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:44:16.671556 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:44:16.693654 sh[569]: Success Jan 24 00:44:16.715343 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 24 00:44:16.801506 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:44:16.808935 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:44:16.836922 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:44:16.887866 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:44:16.887972 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:44:16.888000 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:44:16.897301 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:44:16.909826 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:44:16.941366 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 24 00:44:16.947482 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:44:16.948488 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:44:16.952520 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
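verity-setup.service maps /dev/mapper/usr against the verity.usrhash= root hash from the kernel command line, and the kernel reports picking the sha256-avx2 implementation for it. The toy sketch below shows only the leaf level of a verity-style hash tree; real dm-verity additionally salts the hashes, builds further tree levels over them until a single root digest remains, and stores everything in its own on-disk format:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// leafHashes cuts data into fixed-size blocks and hashes each one --
// the bottom level of a verity-style hash tree. Upper levels (omitted
// here) hash the hashes until a single root digest remains, which is
// the value pinned by verity.usrhash=.
func leafHashes(data []byte, blockSize int) [][32]byte {
	var hashes [][32]byte
	for off := 0; off < len(data); off += blockSize {
		end := off + blockSize
		if end > len(data) {
			end = len(data)
		}
		hashes = append(hashes, sha256.Sum256(data[off:end]))
	}
	return hashes
}

func main() {
	data := make([]byte, 16384) // stand-in for four 4 KiB blocks of the usr device
	for i, h := range leafHashes(data, 4096) {
		fmt.Printf("block %d: %x...\n", i, h[:4])
	}
}
```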
Jan 24 00:44:17.030496 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:44:17.030542 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:44:17.030570 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:44:17.030594 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:44:17.030620 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:44:17.025676 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:44:17.052819 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:44:17.067195 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:44:17.074678 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:44:17.171298 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:44:17.218756 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:44:17.279761 ignition[669]: Ignition 2.19.0 Jan 24 00:44:17.280233 ignition[669]: Stage: fetch-offline Jan 24 00:44:17.283013 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:44:17.280342 ignition[669]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:44:17.290689 systemd-networkd[752]: lo: Link UP Jan 24 00:44:17.280358 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 24 00:44:17.290695 systemd-networkd[752]: lo: Gained carrier Jan 24 00:44:17.280529 ignition[669]: parsed url from cmdline: "" Jan 24 00:44:17.292341 systemd-networkd[752]: Enumeration completed Jan 24 00:44:17.280537 ignition[669]: no config URL provided Jan 24 00:44:17.292922 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:44:17.280546 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:44:17.292929 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:44:17.280560 ignition[669]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:44:17.295278 systemd-networkd[752]: eth0: Link UP Jan 24 00:44:17.280571 ignition[669]: failed to fetch config: resource requires networking Jan 24 00:44:17.295283 systemd-networkd[752]: eth0: Gained carrier Jan 24 00:44:17.280863 ignition[669]: Ignition finished successfully Jan 24 00:44:17.295293 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:44:17.395832 ignition[760]: Ignition 2.19.0 Jan 24 00:44:17.304096 systemd[1]: Started systemd-networkd.service - Network Configuration. 
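Note the ordering above: the fetch-offline stage gives up with "resource requires networking", so Ignition's networked fetch stage can only run once systemd-networkd has brought eth0 up. A simplified stand-in for that wait, polling sysfs rather than listening on netlink as networkd really does:

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// waitForLink polls the kernel's operstate file until the interface
// reports "up". This is only a stand-in to make the ordering concrete.
func waitForLink(iface string, timeout time.Duration) error {
	path := "/sys/class/net/" + iface + "/operstate"
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		b, err := os.ReadFile(path)
		if err == nil && strings.TrimSpace(string(b)) == "up" {
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("%s: no carrier within %v", iface, timeout)
}

func main() {
	if err := waitForLink("eth0", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("eth0 is up; the networked fetch stage can proceed")
}
```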
Jan 24 00:44:17.395845 ignition[760]: Stage: fetch Jan 24 00:44:17.312402 systemd-networkd[752]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' Jan 24 00:44:17.396090 ignition[760]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:44:17.312418 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.29/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 24 00:44:17.396103 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 24 00:44:17.331755 systemd[1]: Reached target network.target - Network. Jan 24 00:44:17.396245 ignition[760]: parsed url from cmdline: "" Jan 24 00:44:17.352551 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 24 00:44:17.396250 ignition[760]: no config URL provided Jan 24 00:44:17.410375 unknown[760]: fetched base config from "system" Jan 24 00:44:17.396257 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:44:17.410391 unknown[760]: fetched base config from "system" Jan 24 00:44:17.396268 ignition[760]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:44:17.410428 unknown[760]: fetched user config from "gcp" Jan 24 00:44:17.396292 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 24 00:44:17.412981 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:44:17.400464 ignition[760]: GET result: OK Jan 24 00:44:17.431516 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:44:17.400550 ignition[760]: parsing config with SHA512: a6d82098504671f0c69b463c91ed9c5f7889d3dacb2cabe69fa5dc18d59e68e608ed33f21786d73e8409ad7740e518d1e986605f4885c583001df15da943f257 Jan 24 00:44:17.457849 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:44:17.411048 ignition[760]: fetch: fetch complete Jan 24 00:44:17.476521 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:44:17.411054 ignition[760]: fetch: fetch passed Jan 24 00:44:17.517796 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:44:17.411108 ignition[760]: Ignition finished successfully Jan 24 00:44:17.534185 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:44:17.455286 ignition[767]: Ignition 2.19.0 Jan 24 00:44:17.554489 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:44:17.455295 ignition[767]: Stage: kargs Jan 24 00:44:17.569501 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:44:17.455537 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:44:17.601525 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:44:17.455550 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 24 00:44:17.618514 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:44:17.456561 ignition[767]: kargs: kargs passed Jan 24 00:44:17.645536 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
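The fetch stage's GET against 169.254.169.254 is GCE's instance metadata service, which rejects requests that lack the Metadata-Flavor: Google header; Ignition then logs a SHA512 of the payload before parsing it. A sketch of the equivalent request (only reachable from inside an instance):

```go
package main

import (
	"crypto/sha512"
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET",
		"http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Metadata-Flavor", "Google") // required; requests without it are refused

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err) // the metadata server is only reachable from inside a GCE instance
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("parsing config with SHA512: %x\n", sha512.Sum512(body))
}
```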
Jan 24 00:44:17.456617 ignition[767]: Ignition finished successfully Jan 24 00:44:17.503903 ignition[772]: Ignition 2.19.0 Jan 24 00:44:17.503912 ignition[772]: Stage: disks Jan 24 00:44:17.504126 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:44:17.504139 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 24 00:44:17.505515 ignition[772]: disks: disks passed Jan 24 00:44:17.505600 ignition[772]: Ignition finished successfully Jan 24 00:44:17.690191 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 24 00:44:17.894451 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:44:17.927504 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:44:18.049362 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:44:18.049944 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:44:18.050851 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:44:18.085469 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:44:18.101453 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:44:18.121035 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:44:18.192487 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789) Jan 24 00:44:18.192545 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:44:18.192573 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:44:18.192596 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:44:18.192621 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:44:18.192648 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:44:18.121124 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:44:18.121161 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:44:18.165253 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:44:18.202386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:44:18.224552 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:44:18.371601 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:44:18.383471 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:44:18.393455 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:44:18.403455 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:44:18.537884 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:44:18.543449 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:44:18.563621 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:44:18.592738 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:44:18.609482 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:44:18.636000 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
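The systemd-fsck summary is e2fsck's standard clean report: inodes-used/inodes-total, then blocks-used/blocks-total. Assuming the ext4 default block size of 4 KiB (the log does not state it), the numbers decode as follows:

```go
package main

import "fmt"

func main() {
	// "ROOT: clean, 14/1628000 files, 120691/1617920 blocks"
	inodesUsed, inodesTotal := 14.0, 1628000.0
	blocksUsed, blocksTotal := 120691.0, 1617920.0
	const blockSize = 4096.0 // assumed ext4 default; not stated in the log

	fmt.Printf("inodes: %.4f%% used\n", 100*inodesUsed/inodesTotal)
	fmt.Printf("blocks: %.2f%% used (%.2f GiB of %.2f GiB)\n",
		100*blocksUsed/blocksTotal,
		blocksUsed*blockSize/(1<<30),
		blocksTotal*blockSize/(1<<30))
}
```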
Jan 24 00:44:18.645791 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:44:18.671604 ignition[905]: INFO : Ignition 2.19.0 Jan 24 00:44:18.671604 ignition[905]: INFO : Stage: mount Jan 24 00:44:18.671604 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:44:18.671604 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 24 00:44:18.671604 ignition[905]: INFO : mount: mount passed Jan 24 00:44:18.671604 ignition[905]: INFO : Ignition finished successfully Jan 24 00:44:18.670445 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:44:19.055568 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:44:19.089543 systemd-networkd[752]: eth0: Gained IPv6LL Jan 24 00:44:19.126171 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (918) Jan 24 00:44:19.126214 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:44:19.126239 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:44:19.126264 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:44:19.141691 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:44:19.141767 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:44:19.144948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:44:19.183448 ignition[935]: INFO : Ignition 2.19.0 Jan 24 00:44:19.183448 ignition[935]: INFO : Stage: files Jan 24 00:44:19.197476 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:44:19.197476 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 24 00:44:19.197476 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:44:19.197476 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:44:19.197476 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:44:19.197476 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:44:19.197476 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:44:19.197476 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:44:19.197476 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:44:19.197476 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:44:19.194627 unknown[935]: wrote ssh authorized keys file for user: core Jan 24 00:44:19.333447 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:44:19.442088 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file 
"/sysroot/home/core/nginx.yaml" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:44:19.458493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:44:19.837477 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:44:20.479613 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:44:20.479613 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:44:20.498673 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:44:20.498673 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:44:20.498673 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:44:20.498673 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:44:20.498673 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:44:20.498673 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:44:20.498673 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:44:20.498673 ignition[935]: INFO : files: files passed Jan 24 00:44:20.498673 ignition[935]: INFO : Ignition finished successfully Jan 24 00:44:20.484467 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 24 00:44:20.524597 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:44:20.574555 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:44:20.584017 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:44:20.731490 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:44:20.731490 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:44:20.584145 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:44:20.769519 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:44:20.655659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:44:20.670663 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:44:20.692558 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:44:20.766898 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:44:20.767024 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:44:20.780723 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:44:20.794723 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:44:20.824702 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:44:20.829536 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:44:20.899365 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:44:20.927538 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:44:20.959037 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:44:20.977762 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:44:20.987834 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:44:21.007802 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:44:21.008004 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:44:21.040836 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:44:21.051837 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:44:21.068777 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:44:21.083779 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:44:21.101794 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:44:21.120797 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:44:21.138758 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:44:21.155772 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:44:21.176759 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:44:21.193783 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:44:21.224610 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:44:21.225016 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 24 00:44:21.251728 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:44:21.252137 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:44:21.269754 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:44:21.269926 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:44:21.288763 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:44:21.288950 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:44:21.327742 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:44:21.327952 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:44:21.335833 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:44:21.336009 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:44:21.361680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:44:21.412601 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:44:21.433495 ignition[988]: INFO : Ignition 2.19.0 Jan 24 00:44:21.433495 ignition[988]: INFO : Stage: umount Jan 24 00:44:21.433495 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:44:21.433495 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 24 00:44:21.433495 ignition[988]: INFO : umount: umount passed Jan 24 00:44:21.433495 ignition[988]: INFO : Ignition finished successfully Jan 24 00:44:21.418639 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:44:21.418842 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:44:21.447962 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:44:21.448193 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:44:21.492251 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:44:21.493429 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:44:21.493549 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:44:21.497121 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:44:21.497231 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:44:21.514276 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:44:21.514443 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:44:21.540350 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:44:21.540495 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:44:21.559656 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:44:21.559720 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:44:21.579677 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:44:21.579745 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:44:21.589713 systemd[1]: Stopped target network.target - Network. Jan 24 00:44:21.604694 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:44:21.604777 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:44:21.619733 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:44:21.637662 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 24 00:44:21.641451 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:44:21.670481 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:44:21.686571 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:44:21.697729 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:44:21.697808 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:44:21.712695 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:44:21.712774 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:44:21.729726 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:44:21.729799 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:44:21.746726 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:44:21.746795 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:44:21.763710 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:44:21.763779 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:44:21.780920 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:44:21.785381 systemd-networkd[752]: eth0: DHCPv6 lease lost Jan 24 00:44:21.808618 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:44:21.828067 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:44:21.828208 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:44:21.839267 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:44:21.839616 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:44:21.867659 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:44:21.867736 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:44:21.880428 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:44:21.892637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:44:21.892719 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:44:21.920682 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:44:21.920746 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:44:21.938715 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:44:21.938776 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:44:21.965671 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:44:21.965742 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:44:21.984791 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:44:22.012097 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:44:22.012283 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:44:22.045613 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:44:22.045687 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:44:22.058704 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:44:22.451425 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). 
Jan 24 00:44:22.058751 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:44:22.085640 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:44:22.085724 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:44:22.115653 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:44:22.115884 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:44:22.140707 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:44:22.140789 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:44:22.183587 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:44:22.214436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:44:22.214555 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:44:22.232572 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 00:44:22.232666 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:44:22.253555 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:44:22.253649 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:44:22.275564 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:44:22.275660 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:44:22.296103 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:44:22.296235 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:44:22.315880 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:44:22.316000 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:44:22.337755 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:44:22.361549 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:44:22.405691 systemd[1]: Switching root. Jan 24 00:44:22.678443 systemd-journald[184]: Journal stopped Jan 24 00:44:25.149235 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:44:25.149290 kernel: SELinux: policy capability open_perms=1 Jan 24 00:44:25.149324 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:44:25.149343 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:44:25.149361 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:44:25.149379 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:44:25.149399 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:44:25.149423 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:44:25.149442 kernel: audit: type=1403 audit(1769215463.074:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:44:25.149468 systemd[1]: Successfully loaded SELinux policy in 88.577ms. Jan 24 00:44:25.149490 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.618ms. 
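After switch-root, the kernel lines list the capabilities of the freshly loaded SELinux policy. The same flags are exposed under /sys/fs/selinux/policy_capabilities, one 0/1 file per capability, which a small reader can dump (requires an SELinux-enabled kernel with selinuxfs mounted):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/sys/fs/selinux/policy_capabilities"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err) // no SELinux, or selinuxfs not mounted
	}
	for _, e := range entries {
		b, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			continue
		}
		// Mirrors the "SELinux: policy capability <name>=<0|1>" lines above.
		fmt.Printf("%s=%s\n", e.Name(), strings.TrimSpace(string(b)))
	}
}
```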
Jan 24 00:44:25.149513 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:44:25.149533 systemd[1]: Detected virtualization google. Jan 24 00:44:25.149554 systemd[1]: Detected architecture x86-64. Jan 24 00:44:25.149580 systemd[1]: Detected first boot. Jan 24 00:44:25.149602 systemd[1]: Initializing machine ID from random generator. Jan 24 00:44:25.149623 zram_generator::config[1030]: No configuration found. Jan 24 00:44:25.149647 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:44:25.149668 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:44:25.149693 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:44:25.149715 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:44:25.149737 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:44:25.149759 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:44:25.149781 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:44:25.149803 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:44:25.149825 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:44:25.149852 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:44:25.149874 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:44:25.149908 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:44:25.149930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:44:25.149954 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:44:25.149975 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:44:25.149996 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:44:25.150018 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:44:25.150044 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:44:25.150066 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:44:25.150088 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:44:25.150109 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:44:25.150131 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:44:25.150153 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:44:25.150181 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:44:25.150203 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:44:25.150226 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:44:25.150252 systemd[1]: Reached target slices.target - Slice Units. 
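"Initializing machine ID from random generator" on a first boot means systemd fills /etc/machine-id with a random 128-bit ID; sd_id128_randomize stamps it with UUID-v4 version and variant bits before it is written as 32 lowercase hex characters. A sketch of that generation:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

func main() {
	var id [16]byte
	if _, err := rand.Read(id[:]); err != nil {
		panic(err)
	}
	id[6] = (id[6] & 0x0f) | 0x40 // stamp UUID version 4
	id[8] = (id[8] & 0x3f) | 0x80 // stamp RFC 4122 variant
	fmt.Printf("%x\n", id)        // 32 hex chars, /etc/machine-id style
}
```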
Jan 24 00:44:25.150275 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:44:25.150297 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:44:25.150332 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:44:25.150355 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:44:25.150377 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:44:25.150399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:44:25.150426 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:44:25.150451 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:44:25.150473 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:44:25.150501 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:44:25.150527 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:44:25.150559 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:44:25.150582 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:44:25.150605 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:44:25.150629 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:44:25.150652 systemd[1]: Reached target machines.target - Containers. Jan 24 00:44:25.150675 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:44:25.150698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:44:25.150721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:44:25.150748 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:44:25.150771 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:44:25.150794 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:44:25.150816 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:44:25.150839 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:44:25.150862 kernel: ACPI: bus type drm_connector registered Jan 24 00:44:25.150889 kernel: fuse: init (API version 7.39) Jan 24 00:44:25.150910 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:44:25.150936 kernel: loop: module loaded Jan 24 00:44:25.150960 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:44:25.150982 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:44:25.151005 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:44:25.151028 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:44:25.151050 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:44:25.151073 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:44:25.151097 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 24 00:44:25.151153 systemd-journald[1117]: Collecting audit messages is disabled. Jan 24 00:44:25.151203 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:44:25.151226 systemd-journald[1117]: Journal started Jan 24 00:44:25.151273 systemd-journald[1117]: Runtime Journal (/run/log/journal/926863f5170b4a2c94ffd7b8d736d766) is 8.0M, max 148.7M, 140.7M free. Jan 24 00:44:23.940669 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:44:23.961179 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 24 00:44:23.961757 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:44:25.187342 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:44:25.217363 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:44:25.236335 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:44:25.236426 systemd[1]: Stopped verity-setup.service. Jan 24 00:44:25.266347 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:44:25.275358 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:44:25.285963 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:44:25.296728 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:44:25.306738 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:44:25.316670 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:44:25.326708 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:44:25.336714 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:44:25.346957 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:44:25.358867 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:44:25.370849 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:44:25.371103 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:44:25.382862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:44:25.383113 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:44:25.394857 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:44:25.395114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:44:25.405832 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:44:25.406103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:44:25.417855 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:44:25.418104 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:44:25.427840 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:44:25.428083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:44:25.437900 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:44:25.447845 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:44:25.459843 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 24 00:44:25.471861 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:44:25.498182 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:44:25.514453 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:44:25.535389 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:44:25.545488 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:44:25.545570 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:44:25.557547 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:44:25.580629 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:44:25.598561 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:44:25.608666 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:44:25.612834 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:44:25.629658 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:44:25.640512 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:44:25.647630 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:44:25.657576 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:44:25.671567 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:44:25.678680 systemd-journald[1117]: Time spent on flushing to /var/log/journal/926863f5170b4a2c94ffd7b8d736d766 is 52.861ms for 929 entries. Jan 24 00:44:25.678680 systemd-journald[1117]: System Journal (/var/log/journal/926863f5170b4a2c94ffd7b8d736d766) is 8.0M, max 584.8M, 576.8M free. Jan 24 00:44:25.777374 systemd-journald[1117]: Received client request to flush runtime journal. Jan 24 00:44:25.698555 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:44:25.717561 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:44:25.743518 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:44:25.762531 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:44:25.776916 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:44:25.801504 kernel: loop0: detected capacity change from 0 to 54824 Jan 24 00:44:25.794255 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:44:25.805976 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:44:25.817949 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:44:25.829995 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:44:25.862626 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 24 00:44:25.889352 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:44:25.892505 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:44:25.897614 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. Jan 24 00:44:25.898597 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. Jan 24 00:44:25.906706 udevadm[1151]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:44:25.928283 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:44:25.940360 kernel: loop1: detected capacity change from 0 to 140768 Jan 24 00:44:25.957603 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:44:25.970160 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:44:25.976663 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:44:26.032918 kernel: loop2: detected capacity change from 0 to 142488 Jan 24 00:44:26.083822 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:44:26.106980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:44:26.160343 kernel: loop3: detected capacity change from 0 to 224512 Jan 24 00:44:26.172403 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 24 00:44:26.172441 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 24 00:44:26.183879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:44:26.245353 kernel: loop4: detected capacity change from 0 to 54824 Jan 24 00:44:26.286808 kernel: loop5: detected capacity change from 0 to 140768 Jan 24 00:44:26.343445 kernel: loop6: detected capacity change from 0 to 142488 Jan 24 00:44:26.406619 kernel: loop7: detected capacity change from 0 to 224512 Jan 24 00:44:26.436737 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 24 00:44:26.439498 (sd-merge)[1176]: Merged extensions into '/usr'. Jan 24 00:44:26.455544 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:44:26.455569 systemd[1]: Reloading... Jan 24 00:44:26.598373 zram_generator::config[1198]: No configuration found. Jan 24 00:44:26.866160 ldconfig[1143]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:44:26.920291 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:44:27.019475 systemd[1]: Reloading finished in 563 ms. Jan 24 00:44:27.052243 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:44:27.063121 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:44:27.089596 systemd[1]: Starting ensure-sysext.service... Jan 24 00:44:27.109560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:44:27.121193 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:44:27.132404 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
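The "(sd-merge)" lines are systemd-sysext merging the four extension images into /usr as a read-only overlayfs mount with no upper layer. In the sketch below, only the overlay-over-/usr mechanism and the extension names come from the log; the staging mount points under /run are illustrative assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Extension names from the (sd-merge) line; staging paths assumed.
	layers := []string{
		"/run/sysext/containerd-flatcar/usr",
		"/run/sysext/docker-flatcar/usr",
		"/run/sysext/kubernetes/usr",
		"/run/sysext/oem-gce/usr",
		"/usr", // the base layer; rightmost lowerdir has lowest precedence
	}
	opts := "lowerdir=" + strings.Join(layers, ":")
	// No upperdir, so the merged /usr stays read-only, as with sysext.
	fmt.Println("mount -t overlay overlay -o", opts, "/usr")
}
```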
Jan 24 00:44:27.133102 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:44:27.134925 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:44:27.135542 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 24 00:44:27.135671 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 24 00:44:27.138546 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:44:27.138721 systemd[1]: Reloading... Jan 24 00:44:27.141080 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:44:27.141100 systemd-tmpfiles[1243]: Skipping /boot Jan 24 00:44:27.159051 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:44:27.159078 systemd-tmpfiles[1243]: Skipping /boot Jan 24 00:44:27.255361 zram_generator::config[1270]: No configuration found. Jan 24 00:44:27.381756 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:44:27.446779 systemd[1]: Reloading finished in 306 ms. Jan 24 00:44:27.475002 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:44:27.497681 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:44:27.520446 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:44:27.543112 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:44:27.554831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:44:27.574020 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:44:27.590638 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:44:27.609731 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:44:27.610059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:44:27.614701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:44:27.636326 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:44:27.637592 augenrules[1333]: No rules Jan 24 00:44:27.655707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:44:27.665635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:44:27.673452 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Jan 24 00:44:27.675435 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:44:27.683403 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:44:27.690616 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:44:27.701268 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 24 00:44:27.713198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:44:27.713440 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:44:27.723067 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:44:27.723660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:44:27.736521 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:44:27.737796 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:44:27.748326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:44:27.760326 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:44:27.771270 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:44:27.837478 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:44:27.877527 systemd[1]: Finished ensure-sysext.service. Jan 24 00:44:27.896445 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:44:27.896852 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:44:27.898638 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:44:27.904574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:44:27.919533 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:44:27.938071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:44:27.956630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:44:27.971874 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 24 00:44:27.980717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:44:27.989851 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:44:28.000415 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:44:28.019551 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:44:28.029496 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:44:28.029551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:44:28.031863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:44:28.032410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:44:28.052482 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1354) Jan 24 00:44:28.060391 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:44:28.061014 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:44:28.070998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:44:28.071618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:44:28.082900 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 24 00:44:28.083595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:44:28.099007 systemd-resolved[1321]: Positive Trust Anchors: Jan 24 00:44:28.101406 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:44:28.101476 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:44:28.151261 systemd-resolved[1321]: Defaulting to hostname 'linux'. Jan 24 00:44:28.163199 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:44:28.164486 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:44:28.173950 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:44:28.175438 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 24 00:44:28.183673 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:44:28.194935 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:44:28.204853 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:44:28.217361 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:44:28.229346 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:44:28.231596 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 24 00:44:28.258341 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 24 00:44:28.269551 kernel: ACPI: button: Sleep Button [SLPF] Jan 24 00:44:28.290415 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 24 00:44:28.307925 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 24 00:44:28.324566 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:44:28.339467 systemd-networkd[1383]: lo: Link UP Jan 24 00:44:28.339480 systemd-networkd[1383]: lo: Gained carrier Jan 24 00:44:28.350278 systemd-networkd[1383]: Enumeration completed Jan 24 00:44:28.351798 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:44:28.354597 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:44:28.354610 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:44:28.355297 systemd-networkd[1383]: eth0: Link UP Jan 24 00:44:28.355304 systemd-networkd[1383]: eth0: Gained carrier Jan 24 00:44:28.355345 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
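eth0 is matched here by the lowest-priority catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd warns about the potentially unpredictable interface name. The shipped Flatcar file carries more options, but a minimal sketch of such a catch-all DHCP unit is roughly:

    [Match]
    Name=*

    [Network]
    DHCP=yes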
Jan 24 00:44:28.359489 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jan 24 00:44:28.369460 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 24 00:44:28.370400 systemd-networkd[1383]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' Jan 24 00:44:28.370424 systemd-networkd[1383]: eth0: DHCPv4 address 10.128.0.29/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 24 00:44:28.403902 systemd[1]: Reached target network.target - Network. Jan 24 00:44:28.412288 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:44:28.423364 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:44:28.441691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:44:28.452173 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:44:28.465051 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:44:28.487680 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:44:28.512482 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:44:28.543754 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:44:28.545161 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:44:28.552337 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:44:28.562370 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:44:28.590138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:44:28.601924 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:44:28.614514 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:44:28.625625 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:44:28.636539 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:44:28.647731 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:44:28.657608 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:44:28.668466 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:44:28.679452 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:44:28.679518 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:44:28.687448 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:44:28.697034 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:44:28.708207 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:44:28.719869 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:44:28.730308 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:44:28.740623 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 24 00:44:28.750443 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:44:28.758499 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:44:28.758550 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:44:28.770454 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:44:28.785545 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:44:28.803341 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:44:28.823473 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:44:28.834526 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:44:28.844449 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:44:28.853553 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:44:28.872563 systemd[1]: Started ntpd.service - Network Time Service. Jan 24 00:44:28.889591 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:44:28.898785 jq[1433]: false Jan 24 00:44:28.909541 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:44:28.928572 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:44:28.935457 coreos-metadata[1431]: Jan 24 00:44:28.933 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 24 00:44:28.936804 coreos-metadata[1431]: Jan 24 00:44:28.936 INFO Fetch successful Jan 24 00:44:28.936804 coreos-metadata[1431]: Jan 24 00:44:28.936 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 24 00:44:28.938151 coreos-metadata[1431]: Jan 24 00:44:28.937 INFO Fetch successful Jan 24 00:44:28.938151 coreos-metadata[1431]: Jan 24 00:44:28.938 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 24 00:44:28.939069 coreos-metadata[1431]: Jan 24 00:44:28.938 INFO Fetch successful Jan 24 00:44:28.939069 coreos-metadata[1431]: Jan 24 00:44:28.939 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 24 00:44:28.941693 coreos-metadata[1431]: Jan 24 00:44:28.941 INFO Fetch successful Jan 24 00:44:28.953859 extend-filesystems[1434]: Found loop4 Jan 24 00:44:28.953859 extend-filesystems[1434]: Found loop5 Jan 24 00:44:28.953859 extend-filesystems[1434]: Found loop6 Jan 24 00:44:28.953859 extend-filesystems[1434]: Found loop7 Jan 24 00:44:28.953859 extend-filesystems[1434]: Found sda Jan 24 00:44:28.953859 extend-filesystems[1434]: Found sda1 Jan 24 00:44:28.953859 extend-filesystems[1434]: Found sda2 Jan 24 00:44:28.953859 extend-filesystems[1434]: Found sda3 Jan 24 00:44:28.953859 extend-filesystems[1434]: Found usr Jan 24 00:44:28.953859 extend-filesystems[1434]: Found sda4 Jan 24 00:44:28.953859 extend-filesystems[1434]: Found sda6 Jan 24 00:44:28.953859 extend-filesystems[1434]: Found sda7 Jan 24 00:44:28.953859 extend-filesystems[1434]: Found sda9 Jan 24 00:44:28.953859 extend-filesystems[1434]: Checking size of /dev/sda9 Jan 24 00:44:29.134773 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Jan 24 00:44:29.134855 
ntpd[1438]: 24 Jan 00:44:28 ntpd[1438]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:28 ntpd[1438]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:28 ntpd[1438]: ---------------------------------------------------- Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:28 ntpd[1438]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:28 ntpd[1438]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:28 ntpd[1438]: corporation. Support and training for ntp-4 are Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:28 ntpd[1438]: available at https://www.nwtime.org/support Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:28 ntpd[1438]: ---------------------------------------------------- Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:28 ntpd[1438]: proto: precision = 0.116 usec (-23) Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: basedate set to 2026-01-11 Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: gps base set to 2026-01-11 (week 2401) Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: Listen normally on 3 eth0 10.128.0.29:123 Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: Listen normally on 4 lo [::1]:123 Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: bind(21) AF_INET6 fe80::4001:aff:fe80:1d%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1d%2#123 Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: failed to init interface for address fe80::4001:aff:fe80:1d%2 Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: Listening on routing socket on fd #21 for interface updates Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:44:29.134855 ntpd[1438]: 24 Jan 00:44:29 ntpd[1438]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:44:29.180208 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Jan 24 00:44:28.956618 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:44:28.981177 ntpd[1438]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:44:29.209117 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1355) Jan 24 00:44:29.209166 extend-filesystems[1434]: Resized partition /dev/sda9 Jan 24 00:44:28.968122 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). 
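The coreos-metadata fetches a little above go to the GCE metadata server at 169.254.169.254, which answers only requests that carry the Metadata-Flavor header. The same lookups can be reproduced from a shell on the instance:

    curl -s -H 'Metadata-Flavor: Google' \
        'http://169.254.169.254/computeMetadata/v1/instance/hostname'
    curl -s -H 'Metadata-Flavor: Google' \
        'http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip'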
Jan 24 00:44:28.981211 ntpd[1438]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:44:29.255890 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:44:29.255890 extend-filesystems[1459]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 24 00:44:29.255890 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 24 00:44:29.255890 extend-filesystems[1459]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Jan 24 00:44:28.976711 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:44:29.298744 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:44:28.981227 ntpd[1438]: ---------------------------------------------------- Jan 24 00:44:29.298970 extend-filesystems[1434]: Resized filesystem in /dev/sda9 Jan 24 00:44:28.978496 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:44:28.981242 ntpd[1438]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:44:28.996447 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:44:29.309085 jq[1458]: true Jan 24 00:44:28.981256 ntpd[1438]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:44:29.012591 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:44:29.309598 update_engine[1455]: I20260124 00:44:29.212253 1455 main.cc:92] Flatcar Update Engine starting Jan 24 00:44:29.309598 update_engine[1455]: I20260124 00:44:29.219106 1455 update_check_scheduler.cc:74] Next update check in 11m14s Jan 24 00:44:28.981271 ntpd[1438]: corporation. Support and training for ntp-4 are Jan 24 00:44:29.046161 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:44:28.981285 ntpd[1438]: available at https://www.nwtime.org/support Jan 24 00:44:29.047254 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:44:28.981300 ntpd[1438]: ---------------------------------------------------- Jan 24 00:44:29.047781 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:44:28.992933 ntpd[1438]: proto: precision = 0.116 usec (-23) Jan 24 00:44:29.048019 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:44:29.319115 jq[1467]: true Jan 24 00:44:28.995151 dbus-daemon[1432]: [system] SELinux support is enabled Jan 24 00:44:29.099923 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:44:29.001594 ntpd[1438]: basedate set to 2026-01-11 Jan 24 00:44:29.101172 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:44:29.001620 ntpd[1438]: gps base set to 2026-01-11 (week 2401) Jan 24 00:44:29.103931 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:44:29.015450 ntpd[1438]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:44:29.103962 systemd-logind[1449]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 24 00:44:29.015512 ntpd[1438]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:44:29.103994 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:44:29.027494 ntpd[1438]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:44:29.107047 systemd-logind[1449]: New seat seat0. 
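The extend-filesystems output above is an on-line ext4 grow: resize2fs expands the mounted root filesystem in place once the partition has room, here from 1617920 to 3587067 4k blocks. Done by hand, the sequence is roughly the following (growpart from cloud-utils is one common way to grow the partition first; that step is an assumption, not what the unit itself ran):

    growpart /dev/sda 9      # grow partition 9 to fill the remaining disk
    resize2fs /dev/sda9      # on-line resize; works while / is mounted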
Jan 24 00:44:29.027558 ntpd[1438]: Listen normally on 3 eth0 10.128.0.29:123 Jan 24 00:44:29.112714 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:44:29.327741 tar[1466]: linux-amd64/LICENSE Jan 24 00:44:29.327741 tar[1466]: linux-amd64/helm Jan 24 00:44:29.027624 ntpd[1438]: Listen normally on 4 lo [::1]:123 Jan 24 00:44:29.176559 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:44:29.027704 ntpd[1438]: bind(21) AF_INET6 fe80::4001:aff:fe80:1d%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:44:29.176607 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:44:29.027735 ntpd[1438]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1d%2#123 Jan 24 00:44:29.222532 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:44:29.027755 ntpd[1438]: failed to init interface for address fe80::4001:aff:fe80:1d%2 Jan 24 00:44:29.222568 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:44:29.027810 ntpd[1438]: Listening on routing socket on fd #21 for interface updates Jan 24 00:44:29.233969 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:44:29.034554 dbus-daemon[1432]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1383 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 00:44:29.234234 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:44:29.044039 ntpd[1438]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:44:29.266120 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:44:29.044079 ntpd[1438]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:44:29.320190 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:44:29.192105 dbus-daemon[1432]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:44:29.343401 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:44:29.363916 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:44:29.388933 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:44:29.397789 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:44:29.412207 bash[1507]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:44:29.416738 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 24 00:44:29.430759 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:44:29.450187 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:44:29.457578 systemd-networkd[1383]: eth0: Gained IPv6LL Jan 24 00:44:29.462250 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:44:29.473671 systemd[1]: issuegen.service: Deactivated successfully. 
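locksmithd, started above as the cluster reboot manager, coordinates when the staged updates prepared by update_engine may reboot the machine. The strategy="reboot" it logs just below is normally configured in /etc/flatcar/update.conf, along the lines of:

    REBOOT_STRATEGY=reboot    # other documented values include etcd-lock and off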
Jan 24 00:44:29.473951 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:44:29.511760 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:44:29.520957 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:44:29.541579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:44:29.559708 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:44:29.577829 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 24 00:44:29.594735 systemd[1]: Started sshd@0-10.128.0.29:22-4.153.228.146:43140.service - OpenSSH per-connection server daemon (4.153.228.146:43140). Jan 24 00:44:29.613868 dbus-daemon[1432]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 24 00:44:29.625475 init.sh[1519]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 24 00:44:29.634933 init.sh[1519]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 24 00:44:29.634933 init.sh[1519]: + /usr/bin/google_instance_setup Jan 24 00:44:29.630365 systemd[1]: Starting sshkeys.service... Jan 24 00:44:29.628859 dbus-daemon[1432]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1509 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 24 00:44:29.648014 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:44:29.683890 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 24 00:44:29.718241 systemd[1]: Starting polkit.service - Authorization Manager... Jan 24 00:44:29.744932 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:44:29.765821 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:44:29.824098 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:44:29.835645 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:44:29.861243 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:44:29.864770 locksmithd[1510]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:44:29.868808 coreos-metadata[1538]: Jan 24 00:44:29.868 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 24 00:44:29.878264 coreos-metadata[1538]: Jan 24 00:44:29.877 INFO Fetch failed with 404: resource not found Jan 24 00:44:29.879640 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
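The unit name sshd@0-10.128.0.29:22-4.153.228.146:43140.service above is per-connection socket activation: sshd.socket is declared with Accept=yes, so systemd spawns one sshd@.service instance per incoming TCP connection. A minimal sketch of such a socket unit:

    [Unit]
    Description=OpenSSH Server Socket

    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target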
Jan 24 00:44:29.881547 coreos-metadata[1538]: Jan 24 00:44:29.881 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 24 00:44:29.885348 coreos-metadata[1538]: Jan 24 00:44:29.884 INFO Fetch successful Jan 24 00:44:29.885348 coreos-metadata[1538]: Jan 24 00:44:29.884 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 24 00:44:29.889291 coreos-metadata[1538]: Jan 24 00:44:29.886 INFO Fetch failed with 404: resource not found Jan 24 00:44:29.889291 coreos-metadata[1538]: Jan 24 00:44:29.887 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 24 00:44:29.889291 coreos-metadata[1538]: Jan 24 00:44:29.889 INFO Fetch failed with 404: resource not found Jan 24 00:44:29.889291 coreos-metadata[1538]: Jan 24 00:44:29.889 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 24 00:44:29.889983 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:44:29.894195 coreos-metadata[1538]: Jan 24 00:44:29.893 INFO Fetch successful Jan 24 00:44:29.897391 polkitd[1530]: Started polkitd version 121 Jan 24 00:44:29.910675 unknown[1538]: wrote ssh authorized keys file for user: core Jan 24 00:44:29.940014 polkitd[1530]: Loading rules from directory /etc/polkit-1/rules.d Jan 24 00:44:29.940111 polkitd[1530]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 24 00:44:29.945112 polkitd[1530]: Finished loading, compiling and executing 2 rules Jan 24 00:44:29.948469 dbus-daemon[1432]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 24 00:44:29.948683 systemd[1]: Started polkit.service - Authorization Manager. Jan 24 00:44:29.952761 polkitd[1530]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 24 00:44:29.988903 update-ssh-keys[1553]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:44:29.988740 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:44:30.003074 systemd[1]: Finished sshkeys.service. Jan 24 00:44:30.065230 systemd-hostnamed[1509]: Hostname set to (transient) Jan 24 00:44:30.071371 systemd-resolved[1321]: System hostname changed to 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58'. Jan 24 00:44:30.210232 sshd[1520]: Accepted publickey for core from 4.153.228.146 port 43140 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:44:30.221022 sshd[1520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:30.237984 containerd[1468]: time="2026-01-24T00:44:30.234695853Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:44:30.244920 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:44:30.264759 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:44:30.281442 systemd-logind[1449]: New session 1 of user core. Jan 24 00:44:30.323614 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:44:30.345719 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:44:30.351420 containerd[1468]: time="2026-01-24T00:44:30.351367604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 24 00:44:30.362964 containerd[1468]: time="2026-01-24T00:44:30.362457656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:44:30.362964 containerd[1468]: time="2026-01-24T00:44:30.362511792Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:44:30.362964 containerd[1468]: time="2026-01-24T00:44:30.362538774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:44:30.368124 containerd[1468]: time="2026-01-24T00:44:30.364643240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:44:30.368124 containerd[1468]: time="2026-01-24T00:44:30.364687615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:44:30.368124 containerd[1468]: time="2026-01-24T00:44:30.364784279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:44:30.368124 containerd[1468]: time="2026-01-24T00:44:30.364806923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:44:30.368124 containerd[1468]: time="2026-01-24T00:44:30.367559336Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:44:30.368124 containerd[1468]: time="2026-01-24T00:44:30.367601134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:44:30.368124 containerd[1468]: time="2026-01-24T00:44:30.367627807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:44:30.368124 containerd[1468]: time="2026-01-24T00:44:30.367647360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:44:30.368124 containerd[1468]: time="2026-01-24T00:44:30.367796553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:44:30.370365 containerd[1468]: time="2026-01-24T00:44:30.369435031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:44:30.370365 containerd[1468]: time="2026-01-24T00:44:30.369675801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:44:30.370365 containerd[1468]: time="2026-01-24T00:44:30.369703574Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 24 00:44:30.370365 containerd[1468]: time="2026-01-24T00:44:30.369845347Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:44:30.370365 containerd[1468]: time="2026-01-24T00:44:30.369916136Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:44:30.377803 containerd[1468]: time="2026-01-24T00:44:30.377769874Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:44:30.380349 containerd[1468]: time="2026-01-24T00:44:30.378493750Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:44:30.380349 containerd[1468]: time="2026-01-24T00:44:30.378860006Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:44:30.380349 containerd[1468]: time="2026-01-24T00:44:30.378896238Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:44:30.380349 containerd[1468]: time="2026-01-24T00:44:30.378923873Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:44:30.380349 containerd[1468]: time="2026-01-24T00:44:30.379633513Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:44:30.386785 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387511240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387724391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387753031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387775595Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387802074Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387825727Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387847243Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387872084Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387897812Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387928195Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387949885Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.387971863Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.388010441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.390013 containerd[1468]: time="2026-01-24T00:44:30.388035368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.392752 containerd[1468]: time="2026-01-24T00:44:30.388071261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.392752 containerd[1468]: time="2026-01-24T00:44:30.388094917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.392752 containerd[1468]: time="2026-01-24T00:44:30.388116343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.392752 containerd[1468]: time="2026-01-24T00:44:30.388137822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.392752 containerd[1468]: time="2026-01-24T00:44:30.388159323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.392752 containerd[1468]: time="2026-01-24T00:44:30.388181187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.392752 containerd[1468]: time="2026-01-24T00:44:30.388203086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.392752 containerd[1468]: time="2026-01-24T00:44:30.388228211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.392752 containerd[1468]: time="2026-01-24T00:44:30.388248195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.392752 containerd[1468]: time="2026-01-24T00:44:30.388268702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.392752 containerd[1468]: time="2026-01-24T00:44:30.388291194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394207724Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394272530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394301550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394357594Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394471808Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394504649Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394537411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394558477Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394574978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394595273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394620067Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:44:30.397532 containerd[1468]: time="2026-01-24T00:44:30.394636200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:44:30.398109 containerd[1468]: time="2026-01-24T00:44:30.395066984Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:44:30.399850 containerd[1468]: time="2026-01-24T00:44:30.399097523Z" level=info msg="Connect containerd service" Jan 24 00:44:30.399850 containerd[1468]: time="2026-01-24T00:44:30.399172789Z" level=info msg="using legacy CRI server" Jan 24 00:44:30.399850 containerd[1468]: time="2026-01-24T00:44:30.399188507Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:44:30.399850 containerd[1468]: time="2026-01-24T00:44:30.399406935Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:44:30.403695 containerd[1468]: time="2026-01-24T00:44:30.402922625Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:44:30.404165 containerd[1468]: time="2026-01-24T00:44:30.403845074Z" level=info msg="Start subscribing containerd event" Jan 24 00:44:30.404165 containerd[1468]: time="2026-01-24T00:44:30.403923713Z" level=info msg="Start recovering state" Jan 24 00:44:30.404165 containerd[1468]: time="2026-01-24T00:44:30.404015066Z" level=info msg="Start event monitor" Jan 24 00:44:30.404165 containerd[1468]: time="2026-01-24T00:44:30.404039524Z" level=info msg="Start snapshots syncer" Jan 24 00:44:30.404165 containerd[1468]: time="2026-01-24T00:44:30.404054736Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:44:30.404165 containerd[1468]: time="2026-01-24T00:44:30.404066525Z" level=info msg="Start streaming server" Jan 24 00:44:30.410196 containerd[1468]: time="2026-01-24T00:44:30.408011277Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:44:30.410196 containerd[1468]: time="2026-01-24T00:44:30.408098230Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:44:30.410196 containerd[1468]: time="2026-01-24T00:44:30.409490867Z" level=info msg="containerd successfully booted in 0.179364s" Jan 24 00:44:30.408292 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:44:30.704772 systemd[1565]: Queued start job for default target default.target. Jan 24 00:44:30.716242 systemd[1565]: Created slice app.slice - User Application Slice. Jan 24 00:44:30.716819 systemd[1565]: Reached target paths.target - Paths. Jan 24 00:44:30.716848 systemd[1565]: Reached target timers.target - Timers. Jan 24 00:44:30.735492 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:44:30.774217 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:44:30.774449 systemd[1565]: Reached target sockets.target - Sockets. Jan 24 00:44:30.774475 systemd[1565]: Reached target basic.target - Basic System. 
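The long CRI config dump above shows the runc runtime running with SystemdCgroup:true, i.e. containerd delegates cgroup management to systemd instead of driving cgroupfs itself. In /etc/containerd/config.toml that setting corresponds to roughly:

    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true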
Jan 24 00:44:30.774545 systemd[1565]: Reached target default.target - Main User Target. Jan 24 00:44:30.774615 systemd[1565]: Startup finished in 369ms. Jan 24 00:44:30.775619 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:44:30.794462 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:44:30.877736 instance-setup[1522]: INFO Running google_set_multiqueue. Jan 24 00:44:30.907598 instance-setup[1522]: INFO Set channels for eth0 to 2. Jan 24 00:44:30.914274 instance-setup[1522]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 24 00:44:30.916763 instance-setup[1522]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 24 00:44:30.917560 instance-setup[1522]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 24 00:44:30.921554 instance-setup[1522]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 24 00:44:30.922260 instance-setup[1522]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 24 00:44:30.927127 instance-setup[1522]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 24 00:44:30.927368 instance-setup[1522]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 24 00:44:30.929304 instance-setup[1522]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 24 00:44:30.948526 instance-setup[1522]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 24 00:44:30.968518 instance-setup[1522]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 24 00:44:30.977585 instance-setup[1522]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 24 00:44:30.980480 instance-setup[1522]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 24 00:44:30.992352 tar[1466]: linux-amd64/README.md Jan 24 00:44:31.033170 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:44:31.070344 init.sh[1519]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 24 00:44:31.238089 startup-script[1609]: INFO Starting startup scripts. Jan 24 00:44:31.244018 startup-script[1609]: INFO No startup scripts found in metadata. Jan 24 00:44:31.244093 startup-script[1609]: INFO Finished running startup scripts. Jan 24 00:44:31.264828 init.sh[1519]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 24 00:44:31.264828 init.sh[1519]: + daemon_pids=() Jan 24 00:44:31.265404 init.sh[1519]: + for d in accounts clock_skew network Jan 24 00:44:31.266030 init.sh[1519]: + daemon_pids+=($!) Jan 24 00:44:31.266030 init.sh[1519]: + for d in accounts clock_skew network Jan 24 00:44:31.266152 init.sh[1612]: + /usr/bin/google_accounts_daemon Jan 24 00:44:31.266922 init.sh[1519]: + daemon_pids+=($!) Jan 24 00:44:31.266922 init.sh[1519]: + for d in accounts clock_skew network Jan 24 00:44:31.267008 init.sh[1613]: + /usr/bin/google_clock_skew_daemon Jan 24 00:44:31.267671 init.sh[1519]: + daemon_pids+=($!) Jan 24 00:44:31.267765 init.sh[1614]: + /usr/bin/google_network_daemon Jan 24 00:44:31.268067 init.sh[1519]: + NOTIFY_SOCKET=/run/systemd/notify Jan 24 00:44:31.268067 init.sh[1519]: + /usr/bin/systemd-notify --ready Jan 24 00:44:31.288233 systemd[1]: Started oem-gce.service - GCE Linux Agent. 
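google_set_multiqueue above spreads the virtio NIC's interrupts and transmit queues across the two vCPUs: IRQs 31/32 are pinned to CPU 0 and 33/34 to CPU 1, and the XPS masks steer tx-0 to CPU 0 (mask 1) and tx-1 to CPU 1 (mask 2). The same knobs can be set manually, using the values from the log:

    echo 0 > /proc/irq/31/smp_affinity_list              # takes a CPU list
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus    # hex CPU bitmask: CPU 0
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus    # hex CPU bitmask: CPU 1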
Jan 24 00:44:31.303344 init.sh[1519]: + wait -n 1612 1613 1614 Jan 24 00:44:31.597441 groupadd[1618]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 24 00:44:31.605977 groupadd[1618]: group added to /etc/gshadow: name=google-sudoers Jan 24 00:44:31.685653 google-networking[1614]: INFO Starting Google Networking daemon. Jan 24 00:44:31.706905 groupadd[1618]: new group: name=google-sudoers, GID=1000 Jan 24 00:44:31.712045 google-clock-skew[1613]: INFO Starting Google Clock Skew daemon. Jan 24 00:44:31.721919 google-clock-skew[1613]: INFO Clock drift token has changed: 0. Jan 24 00:44:31.744699 google-accounts[1612]: INFO Starting Google Accounts daemon. Jan 24 00:44:31.757295 google-accounts[1612]: WARNING OS Login not installed. Jan 24 00:44:31.758965 google-accounts[1612]: INFO Creating a new user account for 0. Jan 24 00:44:31.763720 init.sh[1632]: useradd: invalid user name '0': use --badname to ignore Jan 24 00:44:31.763900 google-accounts[1612]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 24 00:44:31.982114 ntpd[1438]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1d%2]:123 Jan 24 00:44:31.982693 ntpd[1438]: 24 Jan 00:44:31 ntpd[1438]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1d%2]:123 Jan 24 00:44:32.000156 systemd-resolved[1321]: Clock change detected. Flushing caches. Jan 24 00:44:32.001410 google-clock-skew[1613]: INFO Synced system time with hardware clock. Jan 24 00:44:32.015937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:44:32.028256 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:44:32.038835 systemd[1]: Startup finished in 1.000s (kernel) + 9.301s (initrd) + 9.141s (userspace) = 19.442s. Jan 24 00:44:32.039713 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:44:32.901289 kubelet[1639]: E0124 00:44:32.901162 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:44:32.904829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:44:32.905095 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:44:32.905579 systemd[1]: kubelet.service: Consumed 1.281s CPU time. Jan 24 00:44:40.899568 systemd[1]: Started sshd@1-10.128.0.29:22-4.153.228.146:42490.service - OpenSSH per-connection server daemon (4.153.228.146:42490). Jan 24 00:44:41.127595 sshd[1650]: Accepted publickey for core from 4.153.228.146 port 42490 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:44:41.129573 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:41.136253 systemd-logind[1449]: New session 2 of user core. Jan 24 00:44:41.142419 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:44:41.300711 sshd[1650]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:41.306639 systemd[1]: sshd@1-10.128.0.29:22-4.153.228.146:42490.service: Deactivated successfully. Jan 24 00:44:41.309089 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:44:41.310065 systemd-logind[1449]: Session 2 logged out. 
Waiting for processes to exit. Jan 24 00:44:41.311742 systemd-logind[1449]: Removed session 2. Jan 24 00:44:41.349609 systemd[1]: Started sshd@2-10.128.0.29:22-4.153.228.146:42504.service - OpenSSH per-connection server daemon (4.153.228.146:42504). Jan 24 00:44:41.566736 sshd[1657]: Accepted publickey for core from 4.153.228.146 port 42504 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:44:41.568697 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:41.574147 systemd-logind[1449]: New session 3 of user core. Jan 24 00:44:41.584444 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:44:41.731920 sshd[1657]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:41.737856 systemd[1]: sshd@2-10.128.0.29:22-4.153.228.146:42504.service: Deactivated successfully. Jan 24 00:44:41.740356 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:44:41.741411 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:44:41.742873 systemd-logind[1449]: Removed session 3. Jan 24 00:44:41.775640 systemd[1]: Started sshd@3-10.128.0.29:22-4.153.228.146:42512.service - OpenSSH per-connection server daemon (4.153.228.146:42512). Jan 24 00:44:42.002390 sshd[1664]: Accepted publickey for core from 4.153.228.146 port 42512 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:44:42.004490 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:42.011105 systemd-logind[1449]: New session 4 of user core. Jan 24 00:44:42.017441 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:44:42.173944 sshd[1664]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:42.178559 systemd[1]: sshd@3-10.128.0.29:22-4.153.228.146:42512.service: Deactivated successfully. Jan 24 00:44:42.181079 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:44:42.182923 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:44:42.184930 systemd-logind[1449]: Removed session 4. Jan 24 00:44:42.225563 systemd[1]: Started sshd@4-10.128.0.29:22-4.153.228.146:42516.service - OpenSSH per-connection server daemon (4.153.228.146:42516). Jan 24 00:44:42.475801 sshd[1671]: Accepted publickey for core from 4.153.228.146 port 42516 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:44:42.478646 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:42.485240 systemd-logind[1449]: New session 5 of user core. Jan 24 00:44:42.490479 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:44:42.649452 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:44:42.649993 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:44:42.667231 sudo[1674]: pam_unix(sudo:session): session closed for user root Jan 24 00:44:42.703794 sshd[1671]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:42.708644 systemd[1]: sshd@4-10.128.0.29:22-4.153.228.146:42516.service: Deactivated successfully. Jan 24 00:44:42.711258 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:44:42.713172 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:44:42.714949 systemd-logind[1449]: Removed session 5. 
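The kubelet failure above (and its repeats below) is expected at this point in boot: the unit starts before /var/lib/kubelet/config.yaml exists, and on a kubeadm-provisioned node that file is written later by kubeadm init/join. Purely for illustration, a minimal hand-written version of the missing file would look like:

    # /var/lib/kubelet/config.yaml -- illustrative; kubeadm normally generates this
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd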
Jan 24 00:44:42.754679 systemd[1]: Started sshd@5-10.128.0.29:22-4.153.228.146:42528.service - OpenSSH per-connection server daemon (4.153.228.146:42528). Jan 24 00:44:42.924206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:44:42.934785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:44:42.973858 sshd[1679]: Accepted publickey for core from 4.153.228.146 port 42528 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:44:42.975832 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:42.983762 systemd-logind[1449]: New session 6 of user core. Jan 24 00:44:42.990451 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:44:43.127530 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:44:43.128093 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:44:43.136739 sudo[1686]: pam_unix(sudo:session): session closed for user root Jan 24 00:44:43.154342 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:44:43.154849 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:44:43.173609 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:44:43.176725 auditctl[1689]: No rules Jan 24 00:44:43.178278 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:44:43.178669 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:44:43.181482 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:44:43.231880 augenrules[1708]: No rules Jan 24 00:44:43.234544 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:44:43.236474 sudo[1685]: pam_unix(sudo:session): session closed for user root Jan 24 00:44:43.257815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:44:43.269956 sshd[1679]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:43.271749 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:44:43.277899 systemd[1]: sshd@5-10.128.0.29:22-4.153.228.146:42528.service: Deactivated successfully. Jan 24 00:44:43.282520 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:44:43.285068 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:44:43.288609 systemd-logind[1449]: Removed session 6. Jan 24 00:44:43.317332 systemd[1]: Started sshd@6-10.128.0.29:22-4.153.228.146:42532.service - OpenSSH per-connection server daemon (4.153.228.146:42532). Jan 24 00:44:43.341110 kubelet[1717]: E0124 00:44:43.341052 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:44:43.347482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:44:43.347758 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
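The audit sequence above removes the default rule files and restarts audit-rules, after which auditctl and augenrules both report "No rules". Loading rules again works the same way in reverse; a sketch with one illustrative watch rule:

    cat <<'EOF' >/etc/audit/rules.d/10-example.rules
    # record writes and attribute changes to /etc/passwd
    -w /etc/passwd -p wa -k passwd_changes
    EOF
    systemctl restart audit-rules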
Jan 24 00:44:43.550884 sshd[1726]: Accepted publickey for core from 4.153.228.146 port 42532 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:44:43.551867 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:43.558463 systemd-logind[1449]: New session 7 of user core. Jan 24 00:44:43.572441 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:44:43.694420 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:44:43.694923 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:44:44.148033 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:44:44.160894 (dockerd)[1746]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:44:44.614531 dockerd[1746]: time="2026-01-24T00:44:44.614360687Z" level=info msg="Starting up" Jan 24 00:44:44.767624 dockerd[1746]: time="2026-01-24T00:44:44.766776001Z" level=info msg="Loading containers: start." Jan 24 00:44:44.919213 kernel: Initializing XFRM netlink socket Jan 24 00:44:45.031497 systemd-networkd[1383]: docker0: Link UP Jan 24 00:44:45.055392 dockerd[1746]: time="2026-01-24T00:44:45.055335812Z" level=info msg="Loading containers: done." Jan 24 00:44:45.079194 dockerd[1746]: time="2026-01-24T00:44:45.079111586Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:44:45.079543 dockerd[1746]: time="2026-01-24T00:44:45.079342141Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:44:45.079543 dockerd[1746]: time="2026-01-24T00:44:45.079527456Z" level=info msg="Daemon has completed initialization" Jan 24 00:44:45.121325 dockerd[1746]: time="2026-01-24T00:44:45.121225280Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:44:45.121806 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:44:46.037836 containerd[1468]: time="2026-01-24T00:44:46.037773091Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:44:46.493006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066887599.mount: Deactivated successfully. 
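[Editor's note] dockerd warned above that it is "Not using native diff for overlay2" because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled. A sketch of checking the running kernel's config the way one might by hand; whether /proc/config.gz is exposed depends on the kernel build, so that path is an assumption here.

```go
package main

import (
	"bufio"
	"compress/gzip"
	"fmt"
	"os"
	"strings"
)

// Looks up CONFIG_OVERLAY_FS_REDIRECT_DIR in the running kernel's config.
// /proc/config.gz requires CONFIG_IKCONFIG_PROC and may be absent.
func main() {
	f, err := os.Open("/proc/config.gz")
	if err != nil {
		fmt.Println("kernel config not exposed:", err)
		return
	}
	defer f.Close()
	zr, err := gzip.NewReader(f)
	if err != nil {
		fmt.Println("bad gzip:", err)
		return
	}
	sc := bufio.NewScanner(zr)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "CONFIG_OVERLAY_FS_REDIRECT_DIR=") {
			fmt.Println(sc.Text()) // "=y" explains dockerd's degraded-diff warning
			return
		}
	}
	fmt.Println("CONFIG_OVERLAY_FS_REDIRECT_DIR not set")
}
```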
Jan 24 00:44:48.286656 containerd[1468]: time="2026-01-24T00:44:48.286581639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:48.288374 containerd[1468]: time="2026-01-24T00:44:48.288323353Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29078734" Jan 24 00:44:48.290997 containerd[1468]: time="2026-01-24T00:44:48.289292851Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:48.292825 containerd[1468]: time="2026-01-24T00:44:48.292782616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:48.294456 containerd[1468]: time="2026-01-24T00:44:48.294412523Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.256586428s" Jan 24 00:44:48.294613 containerd[1468]: time="2026-01-24T00:44:48.294585597Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:44:48.295906 containerd[1468]: time="2026-01-24T00:44:48.295870739Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:44:53.598083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:44:53.603902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:44:53.950829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:44:53.957516 (kubelet)[1951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:44:54.007843 kubelet[1951]: E0124 00:44:54.007797 1951 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:44:54.011347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:44:54.011588 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:45:00.001043 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 24 00:45:04.261975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 00:45:04.267781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:04.614374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
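[Editor's note] The kube-apiserver pull above logs both the bytes read (29078734) and the wall-clock duration (2.256586428s), which is enough for a back-of-the-envelope throughput figure:

```go
package main

import (
	"fmt"
	"time"
)

// Effective pull rate from the figures containerd logged above for
// kube-apiserver:v1.32.11 (bytes read and duration copied from the log).
func main() {
	const bytesRead = 29078734
	d, _ := time.ParseDuration("2.256586428s")
	mibps := float64(bytesRead) / d.Seconds() / (1 << 20)
	fmt.Printf("%.1f MiB/s effective pull rate\n", mibps) // ≈ 12.3 MiB/s
}
```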
Jan 24 00:45:04.627744 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:45:04.681296 kubelet[1969]: E0124 00:45:04.681251 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:45:04.684405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:45:04.684760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:45:14.606428 update_engine[1455]: I20260124 00:45:14.606310 1455 update_attempter.cc:509] Updating boot flags... Jan 24 00:45:14.671208 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1985) Jan 24 00:45:14.686612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 24 00:45:14.697475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:14.797410 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1988) Jan 24 00:45:14.906590 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1988) Jan 24 00:45:15.162686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:45:15.169112 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:45:15.219062 kubelet[2005]: E0124 00:45:15.218988 2005 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:45:15.221959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:45:15.222254 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 00:45:18.297478 containerd[1468]: time="2026-01-24T00:45:18.297396243Z" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v2/kube-controller-manager/manifests/v1.32.11\": dial tcp 34.96.108.209:443: i/o timeout" host=registry.k8s.io Jan 24 00:45:18.299150 containerd[1468]: time="2026-01-24T00:45:18.299031124Z" level=error msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" failed" error="rpc error: code = DeadlineExceeded desc = failed to pull and unpack image \"registry.k8s.io/kube-controller-manager:v1.32.11\": failed to resolve reference \"registry.k8s.io/kube-controller-manager:v1.32.11\": failed to do request: Head \"https://registry.k8s.io/v2/kube-controller-manager/manifests/v1.32.11\": dial tcp 34.96.108.209:443: i/o timeout" Jan 24 00:45:18.299150 containerd[1468]: time="2026-01-24T00:45:18.299085606Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=0" Jan 24 00:45:18.300013 containerd[1468]: time="2026-01-24T00:45:18.299785769Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:45:19.868897 containerd[1468]: time="2026-01-24T00:45:19.867372331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:19.868897 containerd[1468]: time="2026-01-24T00:45:19.868846847Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24995412" Jan 24 00:45:19.869935 containerd[1468]: time="2026-01-24T00:45:19.869862529Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:19.873458 containerd[1468]: time="2026-01-24T00:45:19.873398738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:19.875132 containerd[1468]: time="2026-01-24T00:45:19.874903975Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.575072866s" Jan 24 00:45:19.875132 containerd[1468]: time="2026-01-24T00:45:19.874952046Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 00:45:19.876220 containerd[1468]: time="2026-01-24T00:45:19.876154719Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:45:21.630098 containerd[1468]: time="2026-01-24T00:45:21.630017331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:21.631753 containerd[1468]: time="2026-01-24T00:45:21.631676238Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19407116" Jan 24 00:45:21.633331 containerd[1468]: time="2026-01-24T00:45:21.632762048Z" level=info 
msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:21.636271 containerd[1468]: time="2026-01-24T00:45:21.636227684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:21.637819 containerd[1468]: time="2026-01-24T00:45:21.637774248Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.761561988s" Jan 24 00:45:21.637968 containerd[1468]: time="2026-01-24T00:45:21.637942314Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:45:21.639347 containerd[1468]: time="2026-01-24T00:45:21.639313009Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:45:22.742170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110437266.mount: Deactivated successfully. Jan 24 00:45:23.485220 containerd[1468]: time="2026-01-24T00:45:23.485133037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:23.486829 containerd[1468]: time="2026-01-24T00:45:23.486450064Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31163922" Jan 24 00:45:23.489232 containerd[1468]: time="2026-01-24T00:45:23.487923231Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:23.491169 containerd[1468]: time="2026-01-24T00:45:23.491118925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:23.492640 containerd[1468]: time="2026-01-24T00:45:23.492284618Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.852926148s" Jan 24 00:45:23.492821 containerd[1468]: time="2026-01-24T00:45:23.492790833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:45:23.495090 containerd[1468]: time="2026-01-24T00:45:23.495048380Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:45:23.921823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767162414.mount: Deactivated successfully. 
Jan 24 00:45:25.150839 containerd[1468]: time="2026-01-24T00:45:25.150775331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:25.152481 containerd[1468]: time="2026-01-24T00:45:25.152418632Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18572327" Jan 24 00:45:25.154075 containerd[1468]: time="2026-01-24T00:45:25.153521848Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:25.157853 containerd[1468]: time="2026-01-24T00:45:25.157270137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:25.158845 containerd[1468]: time="2026-01-24T00:45:25.158798518Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.663695098s" Jan 24 00:45:25.158946 containerd[1468]: time="2026-01-24T00:45:25.158852100Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:45:25.159453 containerd[1468]: time="2026-01-24T00:45:25.159420729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:45:25.280724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 24 00:45:25.287512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:25.584127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:45:25.590940 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:45:25.643528 kubelet[2088]: E0124 00:45:25.643468 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:45:25.646860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:45:25.647122 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:45:25.732768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount828641471.mount: Deactivated successfully. 
Jan 24 00:45:25.739334 containerd[1468]: time="2026-01-24T00:45:25.739268533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:25.740534 containerd[1468]: time="2026-01-24T00:45:25.740465050Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322136" Jan 24 00:45:25.743212 containerd[1468]: time="2026-01-24T00:45:25.741525891Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:25.744578 containerd[1468]: time="2026-01-24T00:45:25.744535927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:25.745710 containerd[1468]: time="2026-01-24T00:45:25.745672271Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 586.205913ms" Jan 24 00:45:25.745860 containerd[1468]: time="2026-01-24T00:45:25.745836108Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:45:25.746815 containerd[1468]: time="2026-01-24T00:45:25.746775637Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:45:26.159942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount813602852.mount: Deactivated successfully. Jan 24 00:45:28.401788 containerd[1468]: time="2026-01-24T00:45:28.401711953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:28.403528 containerd[1468]: time="2026-01-24T00:45:28.403466179Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57690069" Jan 24 00:45:28.405248 containerd[1468]: time="2026-01-24T00:45:28.404611570Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:28.408351 containerd[1468]: time="2026-01-24T00:45:28.408305242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:28.410037 containerd[1468]: time="2026-01-24T00:45:28.409991236Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.663179766s" Jan 24 00:45:28.410244 containerd[1468]: time="2026-01-24T00:45:28.410214414Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:45:31.699385 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:45:31.712633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:31.759778 systemd[1]: Reloading requested from client PID 2181 ('systemctl') (unit session-7.scope)... Jan 24 00:45:31.759956 systemd[1]: Reloading... Jan 24 00:45:31.932263 zram_generator::config[2221]: No configuration found. Jan 24 00:45:32.097672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:45:32.201596 systemd[1]: Reloading finished in 440 ms. Jan 24 00:45:32.271031 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:45:32.271168 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:45:32.271722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:45:32.277675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:32.565418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:45:32.574890 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:45:32.633477 kubelet[2273]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:45:32.633954 kubelet[2273]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:45:32.633954 kubelet[2273]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
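[Editor's note] The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the file given by --config. A sketch of the corresponding KubeletConfiguration shape, emitted as JSON (which the kubelet accepts alongside YAML); the struct below is a hand-rolled illustration rather than the real k8s.io/kubelet type, and the two values are plausible for this node (the plugin dir appears verbatim later in this log) but are assumptions, not copied from its flags.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative subset of KubeletConfiguration; field names mirror the
// kubelet.config.k8s.io/v1beta1 keys but this is a sketch, not the API type.
type kubeletConfig struct {
	Kind                     string `json:"kind"`
	APIVersion               string `json:"apiVersion"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
	VolumePluginDir          string `json:"volumePluginDir"`
}

func main() {
	cfg := kubeletConfig{
		Kind:                     "KubeletConfiguration",
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // assumed
		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```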
Jan 24 00:45:32.634137 kubelet[2273]: I0124 00:45:32.634075 2273 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:45:33.249531 kubelet[2273]: I0124 00:45:33.248342 2273 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:45:33.249531 kubelet[2273]: I0124 00:45:33.248394 2273 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:45:33.249531 kubelet[2273]: I0124 00:45:33.248873 2273 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:45:33.290342 kubelet[2273]: I0124 00:45:33.289969 2273 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:45:33.297217 kubelet[2273]: E0124 00:45:33.296384 2273 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.29:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:45:33.307954 kubelet[2273]: E0124 00:45:33.307878 2273 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:45:33.307954 kubelet[2273]: I0124 00:45:33.307925 2273 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:45:33.312639 kubelet[2273]: I0124 00:45:33.312585 2273 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:45:33.315202 kubelet[2273]: I0124 00:45:33.315117 2273 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:45:33.315462 kubelet[2273]: I0124 00:45:33.315197 2273 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:45:33.315665 kubelet[2273]: I0124 00:45:33.315469 2273 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:45:33.315665 kubelet[2273]: I0124 00:45:33.315487 2273 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:45:33.315770 kubelet[2273]: I0124 00:45:33.315669 2273 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:45:33.322932 kubelet[2273]: I0124 00:45:33.322867 2273 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:45:33.322932 kubelet[2273]: I0124 00:45:33.322931 2273 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:45:33.324386 kubelet[2273]: I0124 00:45:33.322972 2273 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:45:33.324386 kubelet[2273]: I0124 00:45:33.323004 2273 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:45:33.334485 kubelet[2273]: W0124 00:45:33.334412 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58&limit=500&resourceVersion=0": dial tcp 10.128.0.29:6443: connect: connection refused Jan 24 00:45:33.334644 kubelet[2273]: E0124 00:45:33.334497 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58&limit=500&resourceVersion=0\": dial tcp 
10.128.0.29:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:45:33.334706 kubelet[2273]: I0124 00:45:33.334642 2273 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:45:33.335562 kubelet[2273]: I0124 00:45:33.335393 2273 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:45:33.336568 kubelet[2273]: W0124 00:45:33.336051 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.29:6443: connect: connection refused Jan 24 00:45:33.336568 kubelet[2273]: E0124 00:45:33.336125 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.29:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:45:33.336764 kubelet[2273]: W0124 00:45:33.336727 2273 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:45:33.340016 kubelet[2273]: I0124 00:45:33.339975 2273 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:45:33.341775 kubelet[2273]: I0124 00:45:33.341210 2273 server.go:1287] "Started kubelet" Jan 24 00:45:33.343248 kubelet[2273]: I0124 00:45:33.343026 2273 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:45:33.354507 kubelet[2273]: E0124 00:45:33.349071 2273 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.29:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58.188d842763e6ee44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58,UID:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58,},FirstTimestamp:2026-01-24 00:45:33.341142596 +0000 UTC m=+0.760827849,LastTimestamp:2026-01-24 00:45:33.341142596 +0000 UTC m=+0.760827849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58,}" Jan 24 00:45:33.354507 kubelet[2273]: I0124 00:45:33.352947 2273 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:45:33.355828 kubelet[2273]: I0124 00:45:33.355788 2273 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:45:33.357978 kubelet[2273]: I0124 00:45:33.356244 2273 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:45:33.357978 kubelet[2273]: I0124 00:45:33.357086 2273 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:45:33.357978 kubelet[2273]: I0124 00:45:33.357437 2273 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:45:33.358318 kubelet[2273]: E0124 
00:45:33.358290 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" Jan 24 00:45:33.358834 kubelet[2273]: I0124 00:45:33.358803 2273 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:45:33.363339 kubelet[2273]: E0124 00:45:33.363301 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58?timeout=10s\": dial tcp 10.128.0.29:6443: connect: connection refused" interval="200ms" Jan 24 00:45:33.363801 kubelet[2273]: I0124 00:45:33.363776 2273 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:45:33.363901 kubelet[2273]: I0124 00:45:33.363854 2273 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:45:33.364247 kubelet[2273]: I0124 00:45:33.364226 2273 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:45:33.364480 kubelet[2273]: I0124 00:45:33.364456 2273 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:45:33.366685 kubelet[2273]: W0124 00:45:33.366633 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.29:6443: connect: connection refused Jan 24 00:45:33.366908 kubelet[2273]: E0124 00:45:33.366860 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.29:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:45:33.367221 kubelet[2273]: I0124 00:45:33.367200 2273 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:45:33.386961 kubelet[2273]: I0124 00:45:33.386877 2273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:45:33.388748 kubelet[2273]: I0124 00:45:33.388685 2273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:45:33.388748 kubelet[2273]: I0124 00:45:33.388720 2273 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:45:33.388748 kubelet[2273]: I0124 00:45:33.388748 2273 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:45:33.388963 kubelet[2273]: I0124 00:45:33.388759 2273 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:45:33.388963 kubelet[2273]: E0124 00:45:33.388837 2273 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:45:33.395798 kubelet[2273]: E0124 00:45:33.395753 2273 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:45:33.407527 kubelet[2273]: W0124 00:45:33.407220 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.29:6443: connect: connection refused Jan 24 00:45:33.407527 kubelet[2273]: E0124 00:45:33.407332 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.29:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:45:33.450775 kubelet[2273]: I0124 00:45:33.417667 2273 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:45:33.450775 kubelet[2273]: I0124 00:45:33.417689 2273 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:45:33.450775 kubelet[2273]: I0124 00:45:33.417710 2273 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:45:33.459404 kubelet[2273]: E0124 00:45:33.459343 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" Jan 24 00:45:33.489881 kubelet[2273]: E0124 00:45:33.489805 2273 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 00:45:33.506898 kubelet[2273]: I0124 00:45:33.506714 2273 policy_none.go:49] "None policy: Start" Jan 24 00:45:33.506898 kubelet[2273]: I0124 00:45:33.506761 2273 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:45:33.506898 kubelet[2273]: I0124 00:45:33.506783 2273 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:45:33.560221 kubelet[2273]: E0124 00:45:33.560157 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" Jan 24 00:45:33.613232 kubelet[2273]: E0124 00:45:33.564859 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58?timeout=10s\": dial tcp 10.128.0.29:6443: connect: connection refused" interval="400ms" Jan 24 00:45:33.622790 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:45:33.633132 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:45:33.638432 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
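[Editor's note] While the API server at 10.128.0.29:6443 refuses connections, the lease controller retries with an interval that doubles on consecutive failures: the log shows interval="200ms", then "400ms", and later "1.6s". A generic sketch of that doubling backoff; the 200 ms base matches the first logged interval, while the 7 s ceiling is an assumption for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Always fails, standing in for the lease POST against a down API server.
func ensureLease() error {
	return errors.New("dial tcp 10.128.0.29:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond // first interval seen in the log
	const maxInterval = 7 * time.Second
	for attempt := 0; attempt < 4; attempt++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("failed to ensure lease exists, will retry: %v interval=%s\n", err, interval)
			time.Sleep(interval)
			interval *= 2
			if interval > maxInterval {
				interval = maxInterval
			}
			continue
		}
		fmt.Println("lease ensured")
		return
	}
	fmt.Println("demo stops here; the real kubelet keeps retrying")
}
```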
Jan 24 00:45:33.650438 kubelet[2273]: I0124 00:45:33.650403 2273 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:45:33.652209 kubelet[2273]: I0124 00:45:33.651330 2273 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:45:33.652209 kubelet[2273]: I0124 00:45:33.651355 2273 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:45:33.652209 kubelet[2273]: I0124 00:45:33.651871 2273 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:45:33.653632 kubelet[2273]: E0124 00:45:33.653606 2273 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:45:33.653904 kubelet[2273]: E0124 00:45:33.653881 2273 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" Jan 24 00:45:33.713468 systemd[1]: Created slice kubepods-burstable-podcc888bd65bafc8620233c2200cb495ab.slice - libcontainer container kubepods-burstable-podcc888bd65bafc8620233c2200cb495ab.slice. Jan 24 00:45:33.733542 kubelet[2273]: E0124 00:45:33.733456 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.737791 systemd[1]: Created slice kubepods-burstable-pod0f016896f28684c7418990ddbb5a4367.slice - libcontainer container kubepods-burstable-pod0f016896f28684c7418990ddbb5a4367.slice. Jan 24 00:45:33.750311 kubelet[2273]: E0124 00:45:33.750249 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.755050 systemd[1]: Created slice kubepods-burstable-podf9465ed014e1cd6726ed750c4aed4cf5.slice - libcontainer container kubepods-burstable-podf9465ed014e1cd6726ed750c4aed4cf5.slice. 
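[Editor's note] The slices created above embed each static pod's UID, e.g. kubepods-burstable-podcc888bd65bafc8620233c2200cb495ab.slice. A sketch of that naming for the systemd cgroup driver; the dash-to-underscore escaping is how the kubelet handles UIDs containing dashes and is stated here as an assumption (the manifest-hash UIDs in this log happen to contain none).

```go
package main

import (
	"fmt"
	"strings"
)

// Builds the systemd slice name for a pod's cgroup from its QoS class and
// UID, matching the names systemd logged above.
func podSlice(qosClass, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "cc888bd65bafc8620233c2200cb495ab"))
	// kubepods-burstable-podcc888bd65bafc8620233c2200cb495ab.slice
}
```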
Jan 24 00:45:33.757411 kubelet[2273]: I0124 00:45:33.756838 2273 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.758489 kubelet[2273]: E0124 00:45:33.757572 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.29:6443/api/v1/nodes\": dial tcp 10.128.0.29:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.759325 kubelet[2273]: E0124 00:45:33.759273 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.765748 kubelet[2273]: I0124 00:45:33.765442 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc888bd65bafc8620233c2200cb495ab-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"cc888bd65bafc8620233c2200cb495ab\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.765748 kubelet[2273]: I0124 00:45:33.765499 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc888bd65bafc8620233c2200cb495ab-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"cc888bd65bafc8620233c2200cb495ab\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.765748 kubelet[2273]: I0124 00:45:33.765532 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f016896f28684c7418990ddbb5a4367-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"0f016896f28684c7418990ddbb5a4367\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.765748 kubelet[2273]: I0124 00:45:33.765561 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f016896f28684c7418990ddbb5a4367-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"0f016896f28684c7418990ddbb5a4367\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.765988 kubelet[2273]: I0124 00:45:33.765593 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f016896f28684c7418990ddbb5a4367-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"0f016896f28684c7418990ddbb5a4367\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.765988 kubelet[2273]: I0124 00:45:33.765624 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9465ed014e1cd6726ed750c4aed4cf5-kubeconfig\") pod 
\"kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"f9465ed014e1cd6726ed750c4aed4cf5\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.765988 kubelet[2273]: I0124 00:45:33.765653 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc888bd65bafc8620233c2200cb495ab-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"cc888bd65bafc8620233c2200cb495ab\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.765988 kubelet[2273]: I0124 00:45:33.765683 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f016896f28684c7418990ddbb5a4367-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"0f016896f28684c7418990ddbb5a4367\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.766119 kubelet[2273]: I0124 00:45:33.765721 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f016896f28684c7418990ddbb5a4367-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"0f016896f28684c7418990ddbb5a4367\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.962870 kubelet[2273]: I0124 00:45:33.962815 2273 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.963318 kubelet[2273]: E0124 00:45:33.963273 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.29:6443/api/v1/nodes\": dial tcp 10.128.0.29:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:33.965762 kubelet[2273]: E0124 00:45:33.965664 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58?timeout=10s\": dial tcp 10.128.0.29:6443: connect: connection refused" interval="800ms" Jan 24 00:45:34.035310 containerd[1468]: time="2026-01-24T00:45:34.035237907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58,Uid:cc888bd65bafc8620233c2200cb495ab,Namespace:kube-system,Attempt:0,}" Jan 24 00:45:34.051894 containerd[1468]: time="2026-01-24T00:45:34.051797044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58,Uid:0f016896f28684c7418990ddbb5a4367,Namespace:kube-system,Attempt:0,}" Jan 24 00:45:34.061281 containerd[1468]: time="2026-01-24T00:45:34.061230466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58,Uid:f9465ed014e1cd6726ed750c4aed4cf5,Namespace:kube-system,Attempt:0,}" Jan 24 00:45:34.368836 kubelet[2273]: I0124 00:45:34.368694 2273 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 
24 00:45:34.369243 kubelet[2273]: E0124 00:45:34.369199 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.29:6443/api/v1/nodes\": dial tcp 10.128.0.29:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:34.579990 kubelet[2273]: W0124 00:45:34.579929 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.29:6443: connect: connection refused Jan 24 00:45:34.579990 kubelet[2273]: E0124 00:45:34.579995 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.29:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:45:34.674126 kubelet[2273]: W0124 00:45:34.673936 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.29:6443: connect: connection refused Jan 24 00:45:34.674126 kubelet[2273]: E0124 00:45:34.674026 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.29:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:45:34.766398 kubelet[2273]: E0124 00:45:34.766329 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58?timeout=10s\": dial tcp 10.128.0.29:6443: connect: connection refused" interval="1.6s" Jan 24 00:45:34.801769 kubelet[2273]: W0124 00:45:34.801674 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58&limit=500&resourceVersion=0": dial tcp 10.128.0.29:6443: connect: connection refused Jan 24 00:45:34.801923 kubelet[2273]: E0124 00:45:34.801774 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58&limit=500&resourceVersion=0\": dial tcp 10.128.0.29:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:45:34.822732 kubelet[2273]: W0124 00:45:34.822647 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.29:6443: connect: connection refused Jan 24 00:45:34.822732 kubelet[2273]: E0124 00:45:34.822727 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.29:6443: connect: 
connection refused" logger="UnhandledError" Jan 24 00:45:34.842781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3034521220.mount: Deactivated successfully. Jan 24 00:45:34.852208 containerd[1468]: time="2026-01-24T00:45:34.850436032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:45:34.852208 containerd[1468]: time="2026-01-24T00:45:34.851666632Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:45:34.853106 containerd[1468]: time="2026-01-24T00:45:34.853037964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:45:34.854250 containerd[1468]: time="2026-01-24T00:45:34.853851494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313054" Jan 24 00:45:34.855296 containerd[1468]: time="2026-01-24T00:45:34.855251590Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:45:34.857170 containerd[1468]: time="2026-01-24T00:45:34.857118454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:45:34.857766 containerd[1468]: time="2026-01-24T00:45:34.857689613Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:45:34.861208 containerd[1468]: time="2026-01-24T00:45:34.860345217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:45:34.863130 containerd[1468]: time="2026-01-24T00:45:34.863081684Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 827.726261ms" Jan 24 00:45:34.865602 containerd[1468]: time="2026-01-24T00:45:34.865562995Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 813.658011ms" Jan 24 00:45:34.876611 containerd[1468]: time="2026-01-24T00:45:34.876561909Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 815.21672ms" Jan 24 00:45:35.065904 containerd[1468]: time="2026-01-24T00:45:35.065765265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:35.066518 containerd[1468]: time="2026-01-24T00:45:35.065846642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:35.066518 containerd[1468]: time="2026-01-24T00:45:35.065932571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:35.066518 containerd[1468]: time="2026-01-24T00:45:35.066238041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:35.079849 containerd[1468]: time="2026-01-24T00:45:35.079648438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:35.079849 containerd[1468]: time="2026-01-24T00:45:35.079734022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:35.080167 containerd[1468]: time="2026-01-24T00:45:35.079911365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:35.080167 containerd[1468]: time="2026-01-24T00:45:35.080051739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:35.081424 containerd[1468]: time="2026-01-24T00:45:35.080414091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:35.081424 containerd[1468]: time="2026-01-24T00:45:35.081263564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:35.081424 containerd[1468]: time="2026-01-24T00:45:35.081312840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:35.083479 containerd[1468]: time="2026-01-24T00:45:35.083338058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:35.117434 systemd[1]: Started cri-containerd-dccf6ee95510556df79ab39ab7fca7140687a08773a0d1534ed851713aedda17.scope - libcontainer container dccf6ee95510556df79ab39ab7fca7140687a08773a0d1534ed851713aedda17. Jan 24 00:45:35.134987 systemd[1]: Started cri-containerd-942a04099e0845387e8d1d5eb888b3847eaff42ebaeb9379f7634dfa90275be4.scope - libcontainer container 942a04099e0845387e8d1d5eb888b3847eaff42ebaeb9379f7634dfa90275be4. Jan 24 00:45:35.138278 systemd[1]: Started cri-containerd-a23378e2facc0c760e9e672a3da80363b4fad2bb892e8606e728a70920f4dffe.scope - libcontainer container a23378e2facc0c760e9e672a3da80363b4fad2bb892e8606e728a70920f4dffe. 
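[Editor's note] The three sandboxes above pulled registry.k8s.io/pause:3.8 concurrently and all resolved to the same digest (sha256:9001185...), so content-addressed storage makes the duplicates cheap. A hand-rolled sketch of the usual singleflight-style dedup keyed by digest; containerd's own content store works differently, this only illustrates the idea.

```go
package main

import (
	"fmt"
	"sync"
)

// Ensures a blob identified by digest is fetched at most once; concurrent
// callers for the same digest wait on the first fetch instead of repeating it.
type pullGroup struct {
	mu   sync.Mutex
	done map[string]chan struct{}
}

func (g *pullGroup) pull(digest string, fetch func()) {
	g.mu.Lock()
	if ch, ok := g.done[digest]; ok {
		g.mu.Unlock()
		<-ch // another goroutine owns this digest; wait for it
		return
	}
	ch := make(chan struct{})
	g.done[digest] = ch
	g.mu.Unlock()
	fetch()
	close(ch)
}

func main() {
	g := &pullGroup{done: map[string]chan struct{}{}}
	const d = "sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // three sandboxes racing, as in the log
		wg.Add(1)
		go func() {
			defer wg.Done()
			g.pull(d, func() { fmt.Println("fetched", d, "once") })
		}()
	}
	wg.Wait()
}
```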
Jan 24 00:45:35.173902 kubelet[2273]: I0124 00:45:35.173868 2273 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:35.175303 kubelet[2273]: E0124 00:45:35.175114 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.29:6443/api/v1/nodes\": dial tcp 10.128.0.29:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:35.230256 containerd[1468]: time="2026-01-24T00:45:35.228692128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58,Uid:cc888bd65bafc8620233c2200cb495ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"942a04099e0845387e8d1d5eb888b3847eaff42ebaeb9379f7634dfa90275be4\"" Jan 24 00:45:35.238548 kubelet[2273]: E0124 00:45:35.238225 2273 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9d" Jan 24 00:45:35.245659 containerd[1468]: time="2026-01-24T00:45:35.245523195Z" level=info msg="CreateContainer within sandbox \"942a04099e0845387e8d1d5eb888b3847eaff42ebaeb9379f7634dfa90275be4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:45:35.274760 containerd[1468]: time="2026-01-24T00:45:35.273941492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58,Uid:0f016896f28684c7418990ddbb5a4367,Namespace:kube-system,Attempt:0,} returns sandbox id \"dccf6ee95510556df79ab39ab7fca7140687a08773a0d1534ed851713aedda17\"" Jan 24 00:45:35.279711 kubelet[2273]: E0124 00:45:35.279309 2273 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516" Jan 24 00:45:35.281209 containerd[1468]: time="2026-01-24T00:45:35.281150900Z" level=info msg="CreateContainer within sandbox \"dccf6ee95510556df79ab39ab7fca7140687a08773a0d1534ed851713aedda17\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:45:35.285372 containerd[1468]: time="2026-01-24T00:45:35.285320757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58,Uid:f9465ed014e1cd6726ed750c4aed4cf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a23378e2facc0c760e9e672a3da80363b4fad2bb892e8606e728a70920f4dffe\"" Jan 24 00:45:35.287630 kubelet[2273]: E0124 00:45:35.287445 2273 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9d" Jan 24 00:45:35.289272 containerd[1468]: time="2026-01-24T00:45:35.289111020Z" level=info msg="CreateContainer within sandbox \"a23378e2facc0c760e9e672a3da80363b4fad2bb892e8606e728a70920f4dffe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:45:35.293408 containerd[1468]: time="2026-01-24T00:45:35.293356385Z" level=info msg="CreateContainer within sandbox \"942a04099e0845387e8d1d5eb888b3847eaff42ebaeb9379f7634dfa90275be4\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ed48198767bc825aebb411cec22b6691be66cb0c53e82fa9867a7d1b969917ab\"" Jan 24 00:45:35.294632 containerd[1468]: time="2026-01-24T00:45:35.294024080Z" level=info msg="StartContainer for \"ed48198767bc825aebb411cec22b6691be66cb0c53e82fa9867a7d1b969917ab\"" Jan 24 00:45:35.308708 containerd[1468]: time="2026-01-24T00:45:35.308621756Z" level=info msg="CreateContainer within sandbox \"a23378e2facc0c760e9e672a3da80363b4fad2bb892e8606e728a70920f4dffe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"53ffbeee37db89838f3b3155c09f8b55cc97c3b31c7e5d97ef084d754f663b0b\"" Jan 24 00:45:35.309753 containerd[1468]: time="2026-01-24T00:45:35.309717346Z" level=info msg="StartContainer for \"53ffbeee37db89838f3b3155c09f8b55cc97c3b31c7e5d97ef084d754f663b0b\"" Jan 24 00:45:35.319454 containerd[1468]: time="2026-01-24T00:45:35.317425600Z" level=info msg="CreateContainer within sandbox \"dccf6ee95510556df79ab39ab7fca7140687a08773a0d1534ed851713aedda17\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b752aeb4b83b9dc4dc55eb48cf8a98e1d39de2914fe1b8e0f13870808c7e59a\"" Jan 24 00:45:35.320205 containerd[1468]: time="2026-01-24T00:45:35.319801432Z" level=info msg="StartContainer for \"1b752aeb4b83b9dc4dc55eb48cf8a98e1d39de2914fe1b8e0f13870808c7e59a\"" Jan 24 00:45:35.353443 systemd[1]: Started cri-containerd-ed48198767bc825aebb411cec22b6691be66cb0c53e82fa9867a7d1b969917ab.scope - libcontainer container ed48198767bc825aebb411cec22b6691be66cb0c53e82fa9867a7d1b969917ab. Jan 24 00:45:35.381890 systemd[1]: Started cri-containerd-53ffbeee37db89838f3b3155c09f8b55cc97c3b31c7e5d97ef084d754f663b0b.scope - libcontainer container 53ffbeee37db89838f3b3155c09f8b55cc97c3b31c7e5d97ef084d754f663b0b. Jan 24 00:45:35.412234 systemd[1]: Started cri-containerd-1b752aeb4b83b9dc4dc55eb48cf8a98e1d39de2914fe1b8e0f13870808c7e59a.scope - libcontainer container 1b752aeb4b83b9dc4dc55eb48cf8a98e1d39de2914fe1b8e0f13870808c7e59a. 
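The `hostnameMaxLen=63` truncations above stem from the DNS-label limit on pod hostnames: the static pod names embed the 54-character node name and overflow 63 characters, so the kubelet keeps only a valid prefix. A small sketch of that rule follows; the trimming of trailing `-` and `.` mirrors kubelet's behavior but is an assumption here, not something shown in this log.

```go
// Hedged sketch of the 63-character hostname truncation seen in the
// "Hostname for pod was too long" records above.
package main

import (
	"fmt"
	"strings"
)

const hostnameMaxLen = 63 // RFC 1123 DNS label limit

func truncateHostname(name string) string {
	if len(name) <= hostnameMaxLen {
		return name
	}
	truncated := name[:hostnameMaxLen]
	// A DNS label must end with an alphanumeric character, so strip any
	// trailing '-' or '.' left over from the cut (assumed behavior).
	return strings.TrimRight(truncated, "-.")
}

func main() {
	pod := "kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58"
	fmt.Println(truncateHostname(pod))
	// Prints the 63-character prefix, matching truncatedHostname above:
	// kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9d
}
```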
Jan 24 00:45:35.449928 kubelet[2273]: E0124 00:45:35.449851 2273 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.29:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:45:35.484211 containerd[1468]: time="2026-01-24T00:45:35.484030855Z" level=info msg="StartContainer for \"ed48198767bc825aebb411cec22b6691be66cb0c53e82fa9867a7d1b969917ab\" returns successfully" Jan 24 00:45:35.535458 containerd[1468]: time="2026-01-24T00:45:35.535400924Z" level=info msg="StartContainer for \"53ffbeee37db89838f3b3155c09f8b55cc97c3b31c7e5d97ef084d754f663b0b\" returns successfully" Jan 24 00:45:35.567100 containerd[1468]: time="2026-01-24T00:45:35.567046686Z" level=info msg="StartContainer for \"1b752aeb4b83b9dc4dc55eb48cf8a98e1d39de2914fe1b8e0f13870808c7e59a\" returns successfully" Jan 24 00:45:36.445034 kubelet[2273]: E0124 00:45:36.444993 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:36.445593 kubelet[2273]: E0124 00:45:36.445493 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:36.449851 kubelet[2273]: E0124 00:45:36.449817 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:36.781137 kubelet[2273]: I0124 00:45:36.781099 2273 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:37.453253 kubelet[2273]: E0124 00:45:37.451743 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:37.455751 kubelet[2273]: E0124 00:45:37.454879 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:37.455751 kubelet[2273]: E0124 00:45:37.455552 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:38.454865 kubelet[2273]: E0124 00:45:38.454618 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:38.457056 kubelet[2273]: E0124 00:45:38.456872 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:38.890914 kubelet[2273]: E0124 00:45:38.890849 2273 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:38.936234 kubelet[2273]: I0124 00:45:38.935307 2273 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:38.959479 kubelet[2273]: I0124 00:45:38.959422 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:39.008819 kubelet[2273]: E0124 00:45:39.008742 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:39.008819 kubelet[2273]: I0124 00:45:39.008797 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:39.019663 kubelet[2273]: E0124 00:45:39.019615 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:39.019663 kubelet[2273]: I0124 00:45:39.019661 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:39.027338 kubelet[2273]: E0124 00:45:39.027295 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:39.333963 kubelet[2273]: I0124 00:45:39.332606 2273 apiserver.go:52] "Watching apiserver" Jan 24 00:45:39.364758 kubelet[2273]: I0124 00:45:39.364706 2273 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:45:41.059487 systemd[1]: Reloading requested from client PID 2550 ('systemctl') (unit session-7.scope)... Jan 24 00:45:41.059508 systemd[1]: Reloading... Jan 24 00:45:41.185614 zram_generator::config[2590]: No configuration found. Jan 24 00:45:41.335070 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:45:41.461117 systemd[1]: Reloading finished in 400 ms. Jan 24 00:45:41.516286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:41.532042 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:45:41.532441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:45:41.532541 systemd[1]: kubelet.service: Consumed 1.303s CPU time, 132.0M memory peak, 0B memory swap peak. 
Jan 24 00:45:41.537752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:41.874780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:45:41.889791 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:45:41.966959 kubelet[2637]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:45:41.966959 kubelet[2637]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:45:41.966959 kubelet[2637]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:45:41.967576 kubelet[2637]: I0124 00:45:41.967065 2637 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:45:41.975617 kubelet[2637]: I0124 00:45:41.975563 2637 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:45:41.975617 kubelet[2637]: I0124 00:45:41.975597 2637 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:45:41.976065 kubelet[2637]: I0124 00:45:41.976023 2637 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:45:41.977842 kubelet[2637]: I0124 00:45:41.977798 2637 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 00:45:41.981603 kubelet[2637]: I0124 00:45:41.981297 2637 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:45:41.989028 kubelet[2637]: E0124 00:45:41.988988 2637 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:45:41.989028 kubelet[2637]: I0124 00:45:41.989028 2637 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:45:41.992097 kubelet[2637]: I0124 00:45:41.992069 2637 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:45:41.992448 kubelet[2637]: I0124 00:45:41.992392 2637 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:45:41.992702 kubelet[2637]: I0124 00:45:41.992430 2637 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:45:41.992872 kubelet[2637]: I0124 00:45:41.992705 2637 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:45:41.992872 kubelet[2637]: I0124 00:45:41.992723 2637 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:45:41.992872 kubelet[2637]: I0124 00:45:41.992794 2637 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:45:41.993067 kubelet[2637]: I0124 00:45:41.993050 2637 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:45:41.993141 kubelet[2637]: I0124 00:45:41.993097 2637 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:45:41.993141 kubelet[2637]: I0124 00:45:41.993126 2637 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:45:41.993141 kubelet[2637]: I0124 00:45:41.993141 2637 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:45:41.997788 kubelet[2637]: I0124 00:45:41.997756 2637 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:45:42.000202 kubelet[2637]: I0124 00:45:41.998656 2637 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:45:42.003209 kubelet[2637]: I0124 00:45:42.000984 2637 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:45:42.003424 kubelet[2637]: I0124 00:45:42.003406 2637 server.go:1287] "Started kubelet" Jan 24 00:45:42.012209 kubelet[2637]: I0124 00:45:42.006061 2637 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:45:42.020640 kubelet[2637]: I0124 
00:45:42.008768 2637 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:45:42.021453 kubelet[2637]: I0124 00:45:42.021126 2637 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:45:42.021453 kubelet[2637]: E0124 00:45:42.021423 2637 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" not found" Jan 24 00:45:42.023526 kubelet[2637]: I0124 00:45:42.021914 2637 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:45:42.027517 kubelet[2637]: I0124 00:45:42.008832 2637 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:45:42.027646 kubelet[2637]: I0124 00:45:42.027632 2637 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:45:42.027710 kubelet[2637]: I0124 00:45:42.009224 2637 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:45:42.029982 kubelet[2637]: I0124 00:45:42.022480 2637 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:45:42.029982 kubelet[2637]: I0124 00:45:42.022638 2637 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:45:42.049299 kubelet[2637]: I0124 00:45:42.049194 2637 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:45:42.049299 kubelet[2637]: I0124 00:45:42.049234 2637 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:45:42.049520 kubelet[2637]: I0124 00:45:42.049345 2637 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:45:42.049901 kubelet[2637]: E0124 00:45:42.049708 2637 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:45:42.082713 kubelet[2637]: I0124 00:45:42.081948 2637 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:45:42.087141 kubelet[2637]: I0124 00:45:42.085687 2637 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:45:42.087141 kubelet[2637]: I0124 00:45:42.085733 2637 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:45:42.087141 kubelet[2637]: I0124 00:45:42.085763 2637 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
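The `nodeConfig` dump above carries the default hard-eviction thresholds (`memory.available` under 100Mi, `nodefs.available` under 10%, and so on), each expressed either as an absolute quantity or as a percentage of capacity. The sketch below illustrates that comparison; the struct and field names are made up for clarity and are not kubelet's own types.

```go
// Illustrative sketch of evaluating a hard eviction threshold like the
// memory.available<100Mi entry in the NodeConfig dump above: an observed
// availability is compared against a quantity or a fraction of capacity.
package main

import "fmt"

type Threshold struct {
	Signal     string
	Quantity   int64   // absolute bytes, 0 if unset
	Percentage float64 // fraction of capacity, 0 if unset
}

// exceeded reports whether observed availability dips below the threshold.
func exceeded(t Threshold, available, capacity int64) bool {
	limit := t.Quantity
	if t.Percentage > 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := Threshold{Signal: "memory.available", Quantity: 100 * 1024 * 1024} // 100Mi
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.1}             // 10%

	fmt.Println(exceeded(memory, 64*1024*1024, 8<<30)) // true: under 100Mi free
	fmt.Println(exceeded(nodefs, 20<<30, 100<<30))     // false: 20% of disk free
}
```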
Jan 24 00:45:42.087141 kubelet[2637]: I0124 00:45:42.085776 2637 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:45:42.087141 kubelet[2637]: E0124 00:45:42.085839 2637 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:45:42.136298 kubelet[2637]: I0124 00:45:42.135442 2637 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:45:42.136298 kubelet[2637]: I0124 00:45:42.135464 2637 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:45:42.136298 kubelet[2637]: I0124 00:45:42.135490 2637 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:45:42.136298 kubelet[2637]: I0124 00:45:42.135732 2637 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:45:42.136298 kubelet[2637]: I0124 00:45:42.135748 2637 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:45:42.136298 kubelet[2637]: I0124 00:45:42.135774 2637 policy_none.go:49] "None policy: Start" Jan 24 00:45:42.136298 kubelet[2637]: I0124 00:45:42.135790 2637 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:45:42.136298 kubelet[2637]: I0124 00:45:42.135805 2637 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:45:42.136298 kubelet[2637]: I0124 00:45:42.135989 2637 state_mem.go:75] "Updated machine memory state" Jan 24 00:45:42.145995 kubelet[2637]: I0124 00:45:42.145967 2637 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:45:42.147288 kubelet[2637]: I0124 00:45:42.146788 2637 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:45:42.147288 kubelet[2637]: I0124 00:45:42.146813 2637 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:45:42.148326 kubelet[2637]: I0124 00:45:42.148241 2637 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:45:42.151665 kubelet[2637]: E0124 00:45:42.151635 2637 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:45:42.190238 kubelet[2637]: I0124 00:45:42.186668 2637 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.190238 kubelet[2637]: I0124 00:45:42.187151 2637 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.190238 kubelet[2637]: I0124 00:45:42.187952 2637 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.199737 kubelet[2637]: W0124 00:45:42.199291 2637 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 24 00:45:42.200297 kubelet[2637]: W0124 00:45:42.200277 2637 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 24 00:45:42.200576 kubelet[2637]: W0124 00:45:42.200557 2637 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jan 24 00:45:42.263856 kubelet[2637]: I0124 00:45:42.263823 2637 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.275560 kubelet[2637]: I0124 00:45:42.275513 2637 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.276014 kubelet[2637]: I0124 00:45:42.275953 2637 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.332408 kubelet[2637]: I0124 00:45:42.332302 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc888bd65bafc8620233c2200cb495ab-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"cc888bd65bafc8620233c2200cb495ab\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.332739 kubelet[2637]: I0124 00:45:42.332678 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc888bd65bafc8620233c2200cb495ab-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"cc888bd65bafc8620233c2200cb495ab\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.332974 kubelet[2637]: I0124 00:45:42.332903 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f016896f28684c7418990ddbb5a4367-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"0f016896f28684c7418990ddbb5a4367\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.333152 kubelet[2637]: I0124 00:45:42.333071 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f016896f28684c7418990ddbb5a4367-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"0f016896f28684c7418990ddbb5a4367\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.333308 kubelet[2637]: I0124 00:45:42.333266 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f016896f28684c7418990ddbb5a4367-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"0f016896f28684c7418990ddbb5a4367\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.333402 kubelet[2637]: I0124 00:45:42.333363 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9465ed014e1cd6726ed750c4aed4cf5-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"f9465ed014e1cd6726ed750c4aed4cf5\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.333462 kubelet[2637]: I0124 00:45:42.333399 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc888bd65bafc8620233c2200cb495ab-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"cc888bd65bafc8620233c2200cb495ab\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.333530 kubelet[2637]: I0124 00:45:42.333459 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f016896f28684c7418990ddbb5a4367-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"0f016896f28684c7418990ddbb5a4367\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.333530 kubelet[2637]: I0124 00:45:42.333520 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f016896f28684c7418990ddbb5a4367-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" (UID: \"0f016896f28684c7418990ddbb5a4367\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:45:42.995028 kubelet[2637]: I0124 00:45:42.994616 2637 apiserver.go:52] "Watching apiserver" Jan 24 00:45:43.032676 kubelet[2637]: I0124 00:45:43.032617 2637 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:45:43.069711 kubelet[2637]: I0124 00:45:43.069207 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" podStartSLOduration=1.069156557 podStartE2EDuration="1.069156557s" podCreationTimestamp="2026-01-24 00:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 
00:45:43.057422936 +0000 UTC m=+1.159986782" watchObservedRunningTime="2026-01-24 00:45:43.069156557 +0000 UTC m=+1.171720392" Jan 24 00:45:43.084688 kubelet[2637]: I0124 00:45:43.083969 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" podStartSLOduration=1.083944778 podStartE2EDuration="1.083944778s" podCreationTimestamp="2026-01-24 00:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:45:43.070669864 +0000 UTC m=+1.173233709" watchObservedRunningTime="2026-01-24 00:45:43.083944778 +0000 UTC m=+1.186508618" Jan 24 00:45:43.102075 kubelet[2637]: I0124 00:45:43.101088 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" podStartSLOduration=1.101059598 podStartE2EDuration="1.101059598s" podCreationTimestamp="2026-01-24 00:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:45:43.084440411 +0000 UTC m=+1.187004257" watchObservedRunningTime="2026-01-24 00:45:43.101059598 +0000 UTC m=+1.203623469" Jan 24 00:45:48.157024 kubelet[2637]: I0124 00:45:48.156959 2637 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:45:48.157722 containerd[1468]: time="2026-01-24T00:45:48.157461858Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:45:48.158244 kubelet[2637]: I0124 00:45:48.157741 2637 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:45:48.756251 systemd[1]: Created slice kubepods-besteffort-pod1cfffd16_1ea2_4e5f_a1f6_6bc46e49044c.slice - libcontainer container kubepods-besteffort-pod1cfffd16_1ea2_4e5f_a1f6_6bc46e49044c.slice. 
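With the node registered, the kubelet pushed the node's pod CIDR (`192.168.0.0/24`) down to the runtime above, while CNI configuration is still pending at this point. A minimal standard-library sketch, assuming nothing beyond `net/netip`, of parsing that CIDR and walking the first assignable pod addresses:

```go
// Minimal sketch: parse the pod CIDR handed to the runtime above and
// derive the network address and a few usable pod IPs.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix, err := netip.ParsePrefix("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println("network:", prefix.Masked().Addr()) // 192.168.0.0
	fmt.Println("bits:", prefix.Bits())             // 24 -> 256 addresses

	// First few assignable pod IPs (skipping the .0 network address).
	addr := prefix.Addr().Next()
	for i := 0; i < 3; i++ {
		fmt.Println("pod IP:", addr)
		addr = addr.Next()
	}
}
```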
Jan 24 00:45:48.781345 kubelet[2637]: I0124 00:45:48.781053 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cfffd16-1ea2-4e5f-a1f6-6bc46e49044c-lib-modules\") pod \"kube-proxy-t5xh9\" (UID: \"1cfffd16-1ea2-4e5f-a1f6-6bc46e49044c\") " pod="kube-system/kube-proxy-t5xh9" Jan 24 00:45:48.781345 kubelet[2637]: I0124 00:45:48.781107 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1cfffd16-1ea2-4e5f-a1f6-6bc46e49044c-kube-proxy\") pod \"kube-proxy-t5xh9\" (UID: \"1cfffd16-1ea2-4e5f-a1f6-6bc46e49044c\") " pod="kube-system/kube-proxy-t5xh9" Jan 24 00:45:48.781345 kubelet[2637]: I0124 00:45:48.781139 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cfffd16-1ea2-4e5f-a1f6-6bc46e49044c-xtables-lock\") pod \"kube-proxy-t5xh9\" (UID: \"1cfffd16-1ea2-4e5f-a1f6-6bc46e49044c\") " pod="kube-system/kube-proxy-t5xh9" Jan 24 00:45:48.781345 kubelet[2637]: I0124 00:45:48.781169 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-876hb\" (UniqueName: \"kubernetes.io/projected/1cfffd16-1ea2-4e5f-a1f6-6bc46e49044c-kube-api-access-876hb\") pod \"kube-proxy-t5xh9\" (UID: \"1cfffd16-1ea2-4e5f-a1f6-6bc46e49044c\") " pod="kube-system/kube-proxy-t5xh9" Jan 24 00:45:49.071266 containerd[1468]: time="2026-01-24T00:45:49.070259751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5xh9,Uid:1cfffd16-1ea2-4e5f-a1f6-6bc46e49044c,Namespace:kube-system,Attempt:0,}" Jan 24 00:45:49.118465 containerd[1468]: time="2026-01-24T00:45:49.118046518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:49.118465 containerd[1468]: time="2026-01-24T00:45:49.118128105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:49.118465 containerd[1468]: time="2026-01-24T00:45:49.118149315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:49.118465 containerd[1468]: time="2026-01-24T00:45:49.118288091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:49.165482 systemd[1]: Started cri-containerd-26525898d7381b17f688bca679bf07d4e110425ba55adf336b45d3bcbe2e73d6.scope - libcontainer container 26525898d7381b17f688bca679bf07d4e110425ba55adf336b45d3bcbe2e73d6. Jan 24 00:45:49.257148 containerd[1468]: time="2026-01-24T00:45:49.256862474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5xh9,Uid:1cfffd16-1ea2-4e5f-a1f6-6bc46e49044c,Namespace:kube-system,Attempt:0,} returns sandbox id \"26525898d7381b17f688bca679bf07d4e110425ba55adf336b45d3bcbe2e73d6\"" Jan 24 00:45:49.259436 systemd[1]: Created slice kubepods-besteffort-poda47c7cd8_c417_438a_9b89_c6dcc14d2f28.slice - libcontainer container kubepods-besteffort-poda47c7cd8_c417_438a_9b89_c6dcc14d2f28.slice. 
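Besides `lib-modules`, `xtables-lock`, and the `kube-proxy` ConfigMap, the pod mounts a projected `kube-api-access-…` volume, which places the service-account token, namespace, and CA bundle at the standard in-cluster paths. Below is a hedged sketch of reading those files the way an in-cluster client would; only the directory path is the standard location, the rest is illustrative.

```go
// Hedged sketch: reading the projected service-account credentials that
// the kube-api-access-876hb volume above mounts into the pod.
package main

import (
	"fmt"
	"os"
)

const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	token, err := os.ReadFile(saDir + "/token")
	if err != nil {
		fmt.Println("not running in a pod:", err)
		return
	}
	namespace, _ := os.ReadFile(saDir + "/namespace")

	fmt.Printf("namespace: %s\n", namespace)
	fmt.Printf("token length: %d bytes\n", len(token))
	// The CA bundle for verifying the API server sits alongside:
	//   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
}
```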
Jan 24 00:45:49.268479 containerd[1468]: time="2026-01-24T00:45:49.267788953Z" level=info msg="CreateContainer within sandbox \"26525898d7381b17f688bca679bf07d4e110425ba55adf336b45d3bcbe2e73d6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:45:49.285666 kubelet[2637]: I0124 00:45:49.285616 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6x4\" (UniqueName: \"kubernetes.io/projected/a47c7cd8-c417-438a-9b89-c6dcc14d2f28-kube-api-access-ql6x4\") pod \"tigera-operator-7dcd859c48-9s265\" (UID: \"a47c7cd8-c417-438a-9b89-c6dcc14d2f28\") " pod="tigera-operator/tigera-operator-7dcd859c48-9s265" Jan 24 00:45:49.286118 kubelet[2637]: I0124 00:45:49.285697 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a47c7cd8-c417-438a-9b89-c6dcc14d2f28-var-lib-calico\") pod \"tigera-operator-7dcd859c48-9s265\" (UID: \"a47c7cd8-c417-438a-9b89-c6dcc14d2f28\") " pod="tigera-operator/tigera-operator-7dcd859c48-9s265" Jan 24 00:45:49.298025 containerd[1468]: time="2026-01-24T00:45:49.297672207Z" level=info msg="CreateContainer within sandbox \"26525898d7381b17f688bca679bf07d4e110425ba55adf336b45d3bcbe2e73d6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"90f1da8286da57eed2c8d3dc9b728b000e54214141042f8c7afdf60ea4cc5727\"" Jan 24 00:45:49.301699 containerd[1468]: time="2026-01-24T00:45:49.299713846Z" level=info msg="StartContainer for \"90f1da8286da57eed2c8d3dc9b728b000e54214141042f8c7afdf60ea4cc5727\"" Jan 24 00:45:49.338416 systemd[1]: Started cri-containerd-90f1da8286da57eed2c8d3dc9b728b000e54214141042f8c7afdf60ea4cc5727.scope - libcontainer container 90f1da8286da57eed2c8d3dc9b728b000e54214141042f8c7afdf60ea4cc5727. Jan 24 00:45:49.383641 containerd[1468]: time="2026-01-24T00:45:49.383566511Z" level=info msg="StartContainer for \"90f1da8286da57eed2c8d3dc9b728b000e54214141042f8c7afdf60ea4cc5727\" returns successfully" Jan 24 00:45:49.566877 containerd[1468]: time="2026-01-24T00:45:49.566830326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9s265,Uid:a47c7cd8-c417-438a-9b89-c6dcc14d2f28,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:45:49.598602 containerd[1468]: time="2026-01-24T00:45:49.598115283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:49.598981 containerd[1468]: time="2026-01-24T00:45:49.598278967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:49.598981 containerd[1468]: time="2026-01-24T00:45:49.598305457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:49.600523 containerd[1468]: time="2026-01-24T00:45:49.599691809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:49.624459 systemd[1]: Started cri-containerd-8f570b8f4b307f5f8efae1fbe17bf497c605536d05748745969bf6d9c5cd9b14.scope - libcontainer container 8f570b8f4b307f5f8efae1fbe17bf497c605536d05748745969bf6d9c5cd9b14. 
Jan 24 00:45:49.692046 containerd[1468]: time="2026-01-24T00:45:49.691975883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9s265,Uid:a47c7cd8-c417-438a-9b89-c6dcc14d2f28,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8f570b8f4b307f5f8efae1fbe17bf497c605536d05748745969bf6d9c5cd9b14\"" Jan 24 00:45:49.696925 containerd[1468]: time="2026-01-24T00:45:49.696877436Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:45:50.362015 kubelet[2637]: I0124 00:45:50.361927 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t5xh9" podStartSLOduration=2.361900707 podStartE2EDuration="2.361900707s" podCreationTimestamp="2026-01-24 00:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:45:50.146734803 +0000 UTC m=+8.249298654" watchObservedRunningTime="2026-01-24 00:45:50.361900707 +0000 UTC m=+8.464464551" Jan 24 00:45:50.597269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2167040717.mount: Deactivated successfully. Jan 24 00:45:51.593468 containerd[1468]: time="2026-01-24T00:45:51.593396517Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:51.594906 containerd[1468]: time="2026-01-24T00:45:51.594675882Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:45:51.597217 containerd[1468]: time="2026-01-24T00:45:51.595938460Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:51.599663 containerd[1468]: time="2026-01-24T00:45:51.599615628Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:51.601143 containerd[1468]: time="2026-01-24T00:45:51.601089051Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.904160512s" Jan 24 00:45:51.601304 containerd[1468]: time="2026-01-24T00:45:51.601277522Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:45:51.606020 containerd[1468]: time="2026-01-24T00:45:51.605949993Z" level=info msg="CreateContainer within sandbox \"8f570b8f4b307f5f8efae1fbe17bf497c605536d05748745969bf6d9c5cd9b14\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:45:51.621341 containerd[1468]: time="2026-01-24T00:45:51.621293188Z" level=info msg="CreateContainer within sandbox \"8f570b8f4b307f5f8efae1fbe17bf497c605536d05748745969bf6d9c5cd9b14\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2d315e067aac031f4ff7a24a0677bc33b972186982c1da51244e6b753e44cf23\"" Jan 24 00:45:51.622528 containerd[1468]: time="2026-01-24T00:45:51.622366642Z" level=info msg="StartContainer for \"2d315e067aac031f4ff7a24a0677bc33b972186982c1da51244e6b753e44cf23\"" Jan 
24 00:45:51.667071 systemd[1]: run-containerd-runc-k8s.io-2d315e067aac031f4ff7a24a0677bc33b972186982c1da51244e6b753e44cf23-runc.O8tKgx.mount: Deactivated successfully. Jan 24 00:45:51.682508 systemd[1]: Started cri-containerd-2d315e067aac031f4ff7a24a0677bc33b972186982c1da51244e6b753e44cf23.scope - libcontainer container 2d315e067aac031f4ff7a24a0677bc33b972186982c1da51244e6b753e44cf23. Jan 24 00:45:51.722949 containerd[1468]: time="2026-01-24T00:45:51.722892506Z" level=info msg="StartContainer for \"2d315e067aac031f4ff7a24a0677bc33b972186982c1da51244e6b753e44cf23\" returns successfully" Jan 24 00:45:54.502847 kubelet[2637]: I0124 00:45:54.502768 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-9s265" podStartSLOduration=3.594245768 podStartE2EDuration="5.5027416s" podCreationTimestamp="2026-01-24 00:45:49 +0000 UTC" firstStartedPulling="2026-01-24 00:45:49.695169274 +0000 UTC m=+7.797733109" lastFinishedPulling="2026-01-24 00:45:51.603665104 +0000 UTC m=+9.706228941" observedRunningTime="2026-01-24 00:45:52.149977482 +0000 UTC m=+10.252541327" watchObservedRunningTime="2026-01-24 00:45:54.5027416 +0000 UTC m=+12.605305444" Jan 24 00:45:58.926999 sudo[1730]: pam_unix(sudo:session): session closed for user root Jan 24 00:45:58.961452 sshd[1726]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:58.969084 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:45:58.972457 systemd[1]: sshd@6-10.128.0.29:22-4.153.228.146:42532.service: Deactivated successfully. Jan 24 00:45:58.979153 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:45:58.979838 systemd[1]: session-7.scope: Consumed 6.138s CPU time, 155.8M memory peak, 0B memory swap peak. Jan 24 00:45:58.981940 systemd-logind[1449]: Removed session 7. Jan 24 00:46:06.435904 systemd[1]: Created slice kubepods-besteffort-podb90eb77b_9172_413d_87bc_2832c874a683.slice - libcontainer container kubepods-besteffort-podb90eb77b_9172_413d_87bc_2832c874a683.slice. Jan 24 00:46:06.510476 kubelet[2637]: I0124 00:46:06.510410 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hz7x\" (UniqueName: \"kubernetes.io/projected/b90eb77b-9172-413d-87bc-2832c874a683-kube-api-access-5hz7x\") pod \"calico-typha-fb7d87d46-2jmwf\" (UID: \"b90eb77b-9172-413d-87bc-2832c874a683\") " pod="calico-system/calico-typha-fb7d87d46-2jmwf" Jan 24 00:46:06.510476 kubelet[2637]: I0124 00:46:06.510480 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b90eb77b-9172-413d-87bc-2832c874a683-tigera-ca-bundle\") pod \"calico-typha-fb7d87d46-2jmwf\" (UID: \"b90eb77b-9172-413d-87bc-2832c874a683\") " pod="calico-system/calico-typha-fb7d87d46-2jmwf" Jan 24 00:46:06.511447 kubelet[2637]: I0124 00:46:06.510507 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b90eb77b-9172-413d-87bc-2832c874a683-typha-certs\") pod \"calico-typha-fb7d87d46-2jmwf\" (UID: \"b90eb77b-9172-413d-87bc-2832c874a683\") " pod="calico-system/calico-typha-fb7d87d46-2jmwf" Jan 24 00:46:06.543755 systemd[1]: Created slice kubepods-besteffort-pod5637ba04_d60d_4ded_85aa_2b1aa0dbdf28.slice - libcontainer container kubepods-besteffort-pod5637ba04_d60d_4ded_85aa_2b1aa0dbdf28.slice. 
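The pull records above identify `quay.io/tigera/operator:v1.38.7` both by repo tag and by repo digest. A hedged sketch of taking such a reference apart with `github.com/distribution/reference` follows; that module path is the library's current home and is an assumption worth checking against your go.mod.

```go
// Hedged sketch: decomposing an image reference like the one pulled
// above into domain, repository path, and tag.
package main

import (
	"fmt"
	"log"

	"github.com/distribution/reference"
)

func main() {
	named, err := reference.ParseNormalizedNamed("quay.io/tigera/operator:v1.38.7")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("domain:", reference.Domain(named))   // quay.io
	fmt.Println("repository:", reference.Path(named)) // tigera/operator

	if tagged, ok := named.(reference.Tagged); ok {
		fmt.Println("tag:", tagged.Tag()) // v1.38.7
	}
}
```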
Jan 24 00:46:06.612219 kubelet[2637]: I0124 00:46:06.611372 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-cni-bin-dir\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.612219 kubelet[2637]: I0124 00:46:06.611446 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-flexvol-driver-host\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.612219 kubelet[2637]: I0124 00:46:06.611474 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-var-lib-calico\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.612219 kubelet[2637]: I0124 00:46:06.611501 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvqjt\" (UniqueName: \"kubernetes.io/projected/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-kube-api-access-xvqjt\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.612219 kubelet[2637]: I0124 00:46:06.611559 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-cni-log-dir\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.612605 kubelet[2637]: I0124 00:46:06.611582 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-policysync\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.612605 kubelet[2637]: I0124 00:46:06.611609 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-lib-modules\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.612605 kubelet[2637]: I0124 00:46:06.611632 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-tigera-ca-bundle\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.612605 kubelet[2637]: I0124 00:46:06.611657 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-node-certs\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.612605 kubelet[2637]: I0124 00:46:06.611681 2637 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-var-run-calico\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.612865 kubelet[2637]: I0124 00:46:06.611706 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-xtables-lock\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.612865 kubelet[2637]: I0124 00:46:06.611735 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5637ba04-d60d-4ded-85aa-2b1aa0dbdf28-cni-net-dir\") pod \"calico-node-9kcxc\" (UID: \"5637ba04-d60d-4ded-85aa-2b1aa0dbdf28\") " pod="calico-system/calico-node-9kcxc" Jan 24 00:46:06.703231 kubelet[2637]: E0124 00:46:06.701455 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6" Jan 24 00:46:06.707511 kubelet[2637]: I0124 00:46:06.707466 2637 status_manager.go:890] "Failed to get status for pod" podUID="eb162143-7b21-40da-95af-2a95960643a6" pod="calico-system/csi-node-driver-k95gq" err="pods \"csi-node-driver-k95gq\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object" Jan 24 00:46:06.715716 kubelet[2637]: E0124 00:46:06.715525 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:46:06.715716 kubelet[2637]: W0124 00:46:06.715552 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:46:06.715716 kubelet[2637]: E0124 00:46:06.715588 2637 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:46:06.717631 kubelet[2637]: E0124 00:46:06.717306 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:46:06.717631 kubelet[2637]: W0124 00:46:06.717328 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:46:06.717631 kubelet[2637]: E0124 00:46:06.717347 2637 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:46:06.718215 kubelet[2637]: E0124 00:46:06.717947 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:46:06.718215 kubelet[2637]: W0124 00:46:06.717980 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:46:06.718215 kubelet[2637]: E0124 00:46:06.717998 2637 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:46:06.718953 kubelet[2637]: E0124 00:46:06.718415 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:46:06.718953 kubelet[2637]: W0124 00:46:06.718442 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:46:06.718953 kubelet[2637]: E0124 00:46:06.718460 2637 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:46:06.721851 kubelet[2637]: E0124 00:46:06.721811 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:46:06.722056 kubelet[2637]: W0124 00:46:06.721991 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:46:06.723309 kubelet[2637]: E0124 00:46:06.722222 2637 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:46:06.724282 kubelet[2637]: E0124 00:46:06.724152 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:46:06.724487 kubelet[2637]: W0124 00:46:06.724172 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:46:06.725356 kubelet[2637]: E0124 00:46:06.725271 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:46:06.725356 kubelet[2637]: W0124 00:46:06.725291 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:46:06.725356 kubelet[2637]: E0124 00:46:06.725310 2637 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:46:06.726353 kubelet[2637]: E0124 00:46:06.725703 2637 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 24 00:46:06.759009 containerd[1468]: time="2026-01-24T00:46:06.757755457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb7d87d46-2jmwf,Uid:b90eb77b-9172-413d-87bc-2832c874a683,Namespace:calico-system,Attempt:0,}"
Jan 24 00:46:06.828277 kubelet[2637]: I0124 00:46:06.827294 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb162143-7b21-40da-95af-2a95960643a6-kubelet-dir\") pod \"csi-node-driver-k95gq\" (UID: \"eb162143-7b21-40da-95af-2a95960643a6\") " pod="calico-system/csi-node-driver-k95gq"
Jan 24 00:46:06.828538 containerd[1468]: time="2026-01-24T00:46:06.827644836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:46:06.828538 containerd[1468]: time="2026-01-24T00:46:06.827717216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:46:06.828538 containerd[1468]: time="2026-01-24T00:46:06.827741799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:46:06.828538 containerd[1468]: time="2026-01-24T00:46:06.827859314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:46:06.831156 kubelet[2637]: I0124 00:46:06.829273 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72vs5\" (UniqueName: \"kubernetes.io/projected/eb162143-7b21-40da-95af-2a95960643a6-kube-api-access-72vs5\") pod \"csi-node-driver-k95gq\" (UID: \"eb162143-7b21-40da-95af-2a95960643a6\") " pod="calico-system/csi-node-driver-k95gq"
Jan 24 00:46:06.831822 kubelet[2637]: I0124 00:46:06.831590 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eb162143-7b21-40da-95af-2a95960643a6-varrun\") pod \"csi-node-driver-k95gq\" (UID: \"eb162143-7b21-40da-95af-2a95960643a6\") " pod="calico-system/csi-node-driver-k95gq"
Jan 24 00:46:06.837458 kubelet[2637]: I0124 00:46:06.837421 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eb162143-7b21-40da-95af-2a95960643a6-registration-dir\") pod \"csi-node-driver-k95gq\" (UID: \"eb162143-7b21-40da-95af-2a95960643a6\") " pod="calico-system/csi-node-driver-k95gq"
Jan 24 00:46:06.840592 kubelet[2637]: I0124 00:46:06.839756 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eb162143-7b21-40da-95af-2a95960643a6-socket-dir\") pod \"csi-node-driver-k95gq\" (UID: \"eb162143-7b21-40da-95af-2a95960643a6\") " pod="calico-system/csi-node-driver-k95gq"
Jan 24 00:46:06.853614 containerd[1468]: time="2026-01-24T00:46:06.853468724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9kcxc,Uid:5637ba04-d60d-4ded-85aa-2b1aa0dbdf28,Namespace:calico-system,Attempt:0,}"
Jan 24 00:46:06.899480 systemd[1]: Started cri-containerd-cf748865a6af7ffe1333fd1c2dc8eb26454e1684ec4d7c0385ff6b96dad64997.scope - libcontainer container cf748865a6af7ffe1333fd1c2dc8eb26454e1684ec4d7c0385ff6b96dad64997.
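RunPodSandbox is the CRI gRPC call the kubelet issues once per pod; containerd logs the request here and, further down, the sandbox id it returns, while systemd tracks the resulting pause container as a transient cri-containerd-<id>.scope unit. A rough reproduction of the same call against the CRI socket, offered only as a sketch (the socket path and client boilerplate are assumptions, not taken from this host's configuration):

    // Sketch: issue a CRI RunPodSandbox call like the kubelet's, using the
    // published k8s.io/cri-api types. Metadata values are copied from the log.
    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "calico-typha-fb7d87d46-2jmwf",
                    Uid:       "b90eb77b-9172-413d-87bc-2832c874a683",
                    Namespace: "calico-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        // containerd later logs this id back as "returns sandbox id ...".
        fmt.Println(resp.PodSandboxId)
    }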
Jan 24 00:46:06.952310 containerd[1468]: time="2026-01-24T00:46:06.951919844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:46:06.952380 containerd[1468]: time="2026-01-24T00:46:06.952131775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:46:06.952442 containerd[1468]: time="2026-01-24T00:46:06.952343259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:46:06.954032 containerd[1468]: time="2026-01-24T00:46:06.952981966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:46:07.024525 kubelet[2637]: E0124 00:46:07.024397 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:46:07.024525 kubelet[2637]: W0124 00:46:07.024581 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:46:07.025204 kubelet[2637]: E0124 00:46:07.024621 2637 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:46:07.037670 systemd[1]: Started cri-containerd-bb589e3c8de17f05b32053881819b8cd3c861a997141279990a67d1c94a44e9e.scope - libcontainer container bb589e3c8de17f05b32053881819b8cd3c861a997141279990a67d1c94a44e9e.
Jan 24 00:46:07.088521 containerd[1468]: time="2026-01-24T00:46:07.088317556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9kcxc,Uid:5637ba04-d60d-4ded-85aa-2b1aa0dbdf28,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb589e3c8de17f05b32053881819b8cd3c861a997141279990a67d1c94a44e9e\""
Jan 24 00:46:07.098295 containerd[1468]: time="2026-01-24T00:46:07.098252173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 24 00:46:07.110738 containerd[1468]: time="2026-01-24T00:46:07.110237963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb7d87d46-2jmwf,Uid:b90eb77b-9172-413d-87bc-2832c874a683,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf748865a6af7ffe1333fd1c2dc8eb26454e1684ec4d7c0385ff6b96dad64997\""
Jan 24 00:46:08.021718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount544342519.mount: Deactivated successfully.
Jan 24 00:46:08.086731 kubelet[2637]: E0124 00:46:08.086640 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6"
Jan 24 00:46:08.152989 containerd[1468]: time="2026-01-24T00:46:08.152923189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:46:08.154401 containerd[1468]: time="2026-01-24T00:46:08.154335797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492"
Jan 24 00:46:08.156214 containerd[1468]: time="2026-01-24T00:46:08.155726055Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:46:08.159896 containerd[1468]: time="2026-01-24T00:46:08.159266489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:46:08.162356 containerd[1468]: time="2026-01-24T00:46:08.162310629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.064001393s"
Jan 24 00:46:08.162467 containerd[1468]: time="2026-01-24T00:46:08.162362488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 24 00:46:08.170619 containerd[1468]: time="2026-01-24T00:46:08.170166427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 24 00:46:08.172675 containerd[1468]: time="2026-01-24T00:46:08.172615991Z" level=info msg="CreateContainer within sandbox \"bb589e3c8de17f05b32053881819b8cd3c861a997141279990a67d1c94a44e9e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 24 00:46:08.193663 containerd[1468]: time="2026-01-24T00:46:08.193610858Z" level=info msg="CreateContainer within sandbox \"bb589e3c8de17f05b32053881819b8cd3c861a997141279990a67d1c94a44e9e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7c331c9f88f90efa78d14218268b4d7d710d3f1b8030ebd79a7be5c9eb1bf703\""
Jan 24 00:46:08.194433 containerd[1468]: time="2026-01-24T00:46:08.194347147Z" level=info msg="StartContainer for \"7c331c9f88f90efa78d14218268b4d7d710d3f1b8030ebd79a7be5c9eb1bf703\""
Jan 24 00:46:08.236404 systemd[1]: Started cri-containerd-7c331c9f88f90efa78d14218268b4d7d710d3f1b8030ebd79a7be5c9eb1bf703.scope - libcontainer container 7c331c9f88f90efa78d14218268b4d7d710d3f1b8030ebd79a7be5c9eb1bf703.
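The flexvol-driver container created here runs the pod2daemon-flexvol image pulled just above; in upstream Calico this init container's job is to copy the uds FlexVolume driver into the kubelet's plugin directory, i.e. to supply exactly the /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds executable whose absence produced the probe failures earlier in this log.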
Jan 24 00:46:08.284765 containerd[1468]: time="2026-01-24T00:46:08.284595627Z" level=info msg="StartContainer for \"7c331c9f88f90efa78d14218268b4d7d710d3f1b8030ebd79a7be5c9eb1bf703\" returns successfully"
Jan 24 00:46:08.296343 systemd[1]: cri-containerd-7c331c9f88f90efa78d14218268b4d7d710d3f1b8030ebd79a7be5c9eb1bf703.scope: Deactivated successfully.
Jan 24 00:46:08.620699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c331c9f88f90efa78d14218268b4d7d710d3f1b8030ebd79a7be5c9eb1bf703-rootfs.mount: Deactivated successfully.
Jan 24 00:46:08.687937 containerd[1468]: time="2026-01-24T00:46:08.687842349Z" level=info msg="shim disconnected" id=7c331c9f88f90efa78d14218268b4d7d710d3f1b8030ebd79a7be5c9eb1bf703 namespace=k8s.io
Jan 24 00:46:08.688395 containerd[1468]: time="2026-01-24T00:46:08.688024766Z" level=warning msg="cleaning up after shim disconnected" id=7c331c9f88f90efa78d14218268b4d7d710d3f1b8030ebd79a7be5c9eb1bf703 namespace=k8s.io
Jan 24 00:46:08.688395 containerd[1468]: time="2026-01-24T00:46:08.688044429Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:46:10.087222 kubelet[2637]: E0124 00:46:10.087151 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6"
Jan 24 00:46:10.454022 containerd[1468]: time="2026-01-24T00:46:10.453860968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:46:10.455746 containerd[1468]: time="2026-01-24T00:46:10.455524419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890"
Jan 24 00:46:10.458205 containerd[1468]: time="2026-01-24T00:46:10.456999398Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:46:10.460058 containerd[1468]: time="2026-01-24T00:46:10.460013090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:46:10.461089 containerd[1468]: time="2026-01-24T00:46:10.461045527Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.290800962s"
Jan 24 00:46:10.461297 containerd[1468]: time="2026-01-24T00:46:10.461254676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 24 00:46:10.462954 containerd[1468]: time="2026-01-24T00:46:10.462921209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 24 00:46:10.483789 containerd[1468]: time="2026-01-24T00:46:10.483740814Z" level=info msg="CreateContainer within sandbox \"cf748865a6af7ffe1333fd1c2dc8eb26454e1684ec4d7c0385ff6b96dad64997\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 24 00:46:10.503980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781749891.mount: Deactivated successfully.
Jan 24 00:46:10.508099 containerd[1468]: time="2026-01-24T00:46:10.507892695Z" level=info msg="CreateContainer within sandbox \"cf748865a6af7ffe1333fd1c2dc8eb26454e1684ec4d7c0385ff6b96dad64997\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8da2d3117069fb7e79353073c7cb911a05f671daccadb3495de451d985c42472\""
Jan 24 00:46:10.508996 containerd[1468]: time="2026-01-24T00:46:10.508933723Z" level=info msg="StartContainer for \"8da2d3117069fb7e79353073c7cb911a05f671daccadb3495de451d985c42472\""
Jan 24 00:46:10.563437 systemd[1]: Started cri-containerd-8da2d3117069fb7e79353073c7cb911a05f671daccadb3495de451d985c42472.scope - libcontainer container 8da2d3117069fb7e79353073c7cb911a05f671daccadb3495de451d985c42472.
Jan 24 00:46:10.624798 containerd[1468]: time="2026-01-24T00:46:10.624750524Z" level=info msg="StartContainer for \"8da2d3117069fb7e79353073c7cb911a05f671daccadb3495de451d985c42472\" returns successfully"
Jan 24 00:46:12.087935 kubelet[2637]: E0124 00:46:12.087509 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6"
Jan 24 00:46:12.205439 kubelet[2637]: I0124 00:46:12.204731 2637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 00:46:13.733135 containerd[1468]: time="2026-01-24T00:46:13.733070310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:46:13.734603 containerd[1468]: time="2026-01-24T00:46:13.734298083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 24 00:46:13.737038 containerd[1468]: time="2026-01-24T00:46:13.735788292Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:46:13.738649 containerd[1468]: time="2026-01-24T00:46:13.738613059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:46:13.739598 containerd[1468]: time="2026-01-24T00:46:13.739553561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.276458071s"
Jan 24 00:46:13.739710 containerd[1468]: time="2026-01-24T00:46:13.739604291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 24 00:46:13.744364 containerd[1468]: time="2026-01-24T00:46:13.744325938Z" level=info msg="CreateContainer within sandbox \"bb589e3c8de17f05b32053881819b8cd3c861a997141279990a67d1c94a44e9e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
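The recurring "network is not ready ... cni plugin not initialized" syncs for csi-node-driver-k95gq will persist until the install-cni container created above writes a loadable network config into /etc/cni/net.d. A stdlib-only sketch of that same readiness condition (the directory path is taken from the reload error below; the *.conflist layout is the usual CNI convention, assumed here rather than read from this host):

    // Sketch: report whether /etc/cni/net.d holds a decodable CNI conflist,
    // the condition containerd needs before NetworkReady can flip to true.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/cni/net.d" // path from the "failed to reload cni configuration" error
        matches, _ := filepath.Glob(filepath.Join(dir, "*.conflist"))
        if len(matches) == 0 {
            fmt.Println("no network config found in", dir) // mirrors the log message
            return
        }
        raw, err := os.ReadFile(matches[0])
        if err != nil {
            fmt.Println("read error:", err)
            return
        }
        var conf struct {
            Name    string            `json:"name"`
            Plugins []json.RawMessage `json:"plugins"`
        }
        if err := json.Unmarshal(raw, &conf); err != nil {
            fmt.Println("unparsable config:", err)
            return
        }
        fmt.Printf("found network %q with %d plugin(s)\n", conf.Name, len(conf.Plugins))
    }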
var-lib-containerd-tmpmounts-containerd\x2dmount981543767.mount: Deactivated successfully. Jan 24 00:46:13.769030 containerd[1468]: time="2026-01-24T00:46:13.768937216Z" level=info msg="CreateContainer within sandbox \"bb589e3c8de17f05b32053881819b8cd3c861a997141279990a67d1c94a44e9e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e2572118051cf254c8baa6891e4276a8229d29a4a840a1ccfcc08ad285da41c2\"" Jan 24 00:46:13.769911 containerd[1468]: time="2026-01-24T00:46:13.769650995Z" level=info msg="StartContainer for \"e2572118051cf254c8baa6891e4276a8229d29a4a840a1ccfcc08ad285da41c2\"" Jan 24 00:46:13.819543 systemd[1]: Started cri-containerd-e2572118051cf254c8baa6891e4276a8229d29a4a840a1ccfcc08ad285da41c2.scope - libcontainer container e2572118051cf254c8baa6891e4276a8229d29a4a840a1ccfcc08ad285da41c2. Jan 24 00:46:13.865114 containerd[1468]: time="2026-01-24T00:46:13.864612400Z" level=info msg="StartContainer for \"e2572118051cf254c8baa6891e4276a8229d29a4a840a1ccfcc08ad285da41c2\" returns successfully" Jan 24 00:46:14.087915 kubelet[2637]: E0124 00:46:14.086955 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6" Jan 24 00:46:14.241716 kubelet[2637]: I0124 00:46:14.241590 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fb7d87d46-2jmwf" podStartSLOduration=4.892134175 podStartE2EDuration="8.241565478s" podCreationTimestamp="2026-01-24 00:46:06 +0000 UTC" firstStartedPulling="2026-01-24 00:46:07.112986017 +0000 UTC m=+25.215549851" lastFinishedPulling="2026-01-24 00:46:10.46241732 +0000 UTC m=+28.564981154" observedRunningTime="2026-01-24 00:46:11.220308314 +0000 UTC m=+29.322872151" watchObservedRunningTime="2026-01-24 00:46:14.241565478 +0000 UTC m=+32.344129321" Jan 24 00:46:14.869507 containerd[1468]: time="2026-01-24T00:46:14.869400509Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:46:14.872604 systemd[1]: cri-containerd-e2572118051cf254c8baa6891e4276a8229d29a4a840a1ccfcc08ad285da41c2.scope: Deactivated successfully. Jan 24 00:46:14.907140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2572118051cf254c8baa6891e4276a8229d29a4a840a1ccfcc08ad285da41c2-rootfs.mount: Deactivated successfully. 
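The repeated "cni plugin not initialized" / NetworkReady=false entries above mean the kubelet is waiting for a CNI network configuration to appear on disk, and the "no network config found in /etc/cni/net.d" reload error above names the directory the runtime is watching. As a rough node-local check only (the two paths below are the usual upstream containerd defaults, assumed rather than confirmed by this log), the state the runtime keeps reporting can be inspected like this:

    #!/usr/bin/env python3
    # Illustrative sketch: list what containerd's CNI config loader would find.
    # CNI_CONF_DIR is quoted from the reload error above; CNI_BIN_DIR is the
    # common default plugin directory and is an assumption for this node.
    import glob
    import os

    CNI_CONF_DIR = "/etc/cni/net.d"
    CNI_BIN_DIR = "/opt/cni/bin"

    confs = sorted(glob.glob(os.path.join(CNI_CONF_DIR, "*.conf*")))
    print("CNI configs:", confs or "none -> explains 'cni plugin not initialized'")
    if os.path.isdir(CNI_BIN_DIR):
        print("CNI binaries:", sorted(os.listdir(CNI_BIN_DIR)))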
Jan 24 00:46:14.938263 kubelet[2637]: I0124 00:46:14.938226 2637 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 24 00:46:14.986982 kubelet[2637]: W0124 00:46:14.986663 2637 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object
Jan 24 00:46:14.986982 kubelet[2637]: E0124 00:46:14.986725 2637 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object" logger="UnhandledError"
Jan 24 00:46:14.986982 kubelet[2637]: I0124 00:46:14.986805 2637 status_manager.go:890] "Failed to get status for pod" podUID="9897b791-32b8-489d-b2bb-407f3c85a8e0" pod="kube-system/coredns-668d6bf9bc-trwx8" err="pods \"coredns-668d6bf9bc-trwx8\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object"
Jan 24 00:46:14.996473 kubelet[2637]: I0124 00:46:14.996250 2637 status_manager.go:890] "Failed to get status for pod" podUID="9897b791-32b8-489d-b2bb-407f3c85a8e0" pod="kube-system/coredns-668d6bf9bc-trwx8" err="pods \"coredns-668d6bf9bc-trwx8\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object"
Jan 24 00:46:14.996473 kubelet[2637]: W0124 00:46:14.996389 2637 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object
Jan 24 00:46:14.996473 kubelet[2637]: E0124 00:46:14.996430 2637 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object" logger="UnhandledError"
Jan 24 00:46:14.996806 kubelet[2637]: W0124 00:46:14.996527 2637 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object
Jan 24 00:46:14.996806 kubelet[2637]: E0124 00:46:14.996554 2637 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object" logger="UnhandledError"
Jan 24 00:46:14.999281 systemd[1]: Created slice kubepods-burstable-pod9897b791_32b8_489d_b2bb_407f3c85a8e0.slice - libcontainer container kubepods-burstable-pod9897b791_32b8_489d_b2bb_407f3c85a8e0.slice.
Jan 24 00:46:15.023079 systemd[1]: Created slice kubepods-besteffort-pod2872f4e1_5630_449e_b234_4abce1bebc05.slice - libcontainer container kubepods-besteffort-pod2872f4e1_5630_449e_b234_4abce1bebc05.slice.
Jan 24 00:46:15.027319 kubelet[2637]: W0124 00:46:15.026760 2637 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object
Jan 24 00:46:15.027319 kubelet[2637]: E0124 00:46:15.026834 2637 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object" logger="UnhandledError"
Jan 24 00:46:15.039881 systemd[1]: Created slice kubepods-burstable-podbc52092c_b025_48a8_bd58_59cac1d3f427.slice - libcontainer container kubepods-burstable-podbc52092c_b025_48a8_bd58_59cac1d3f427.slice.
Jan 24 00:46:15.042328 kubelet[2637]: I0124 00:46:15.041431 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9897b791-32b8-489d-b2bb-407f3c85a8e0-config-volume\") pod \"coredns-668d6bf9bc-trwx8\" (UID: \"9897b791-32b8-489d-b2bb-407f3c85a8e0\") " pod="kube-system/coredns-668d6bf9bc-trwx8"
Jan 24 00:46:15.043942 kubelet[2637]: I0124 00:46:15.042593 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g52ff\" (UniqueName: \"kubernetes.io/projected/9897b791-32b8-489d-b2bb-407f3c85a8e0-kube-api-access-g52ff\") pod \"coredns-668d6bf9bc-trwx8\" (UID: \"9897b791-32b8-489d-b2bb-407f3c85a8e0\") " pod="kube-system/coredns-668d6bf9bc-trwx8"
Jan 24 00:46:15.043942 kubelet[2637]: W0124 00:46:15.042779 2637 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object
Jan 24 00:46:15.045290 kubelet[2637]: E0124 00:46:15.042827 2637 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object" logger="UnhandledError"
Jan 24 00:46:15.045290 kubelet[2637]: W0124 00:46:15.044576 2637 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object
Jan 24 00:46:15.045290 kubelet[2637]: E0124 00:46:15.044611 2637 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object" logger="UnhandledError"
Jan 24 00:46:15.045290 kubelet[2637]: W0124 00:46:15.044684 2637 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object
Jan 24 00:46:15.045995 kubelet[2637]: E0124 00:46:15.044707 2637 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object" logger="UnhandledError"
Jan 24 00:46:15.045995 kubelet[2637]: W0124 00:46:15.044791 2637 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object
Jan 24 00:46:15.045995 kubelet[2637]: E0124 00:46:15.044813 2637 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' and this object" logger="UnhandledError"
Jan 24 00:46:15.055116 systemd[1]: Created slice kubepods-besteffort-pod8d0e50ad_79cd_460b_a113_c524281d7733.slice - libcontainer container kubepods-besteffort-pod8d0e50ad_79cd_460b_a113_c524281d7733.slice.
Jan 24 00:46:15.074837 systemd[1]: Created slice kubepods-besteffort-pod10921630_8357_43e3_be35_29668acdc0c4.slice - libcontainer container kubepods-besteffort-pod10921630_8357_43e3_be35_29668acdc0c4.slice.
Jan 24 00:46:15.090956 systemd[1]: Created slice kubepods-besteffort-pod35a4077d_452d_4ef7_8393_2463352fe219.slice - libcontainer container kubepods-besteffort-pod35a4077d_452d_4ef7_8393_2463352fe219.slice.
Jan 24 00:46:15.103907 systemd[1]: Created slice kubepods-besteffort-podc8ef6693_42b2_4015_8f31_4aeadd5a6288.slice - libcontainer container kubepods-besteffort-podc8ef6693_42b2_4015_8f31_4aeadd5a6288.slice.
Jan 24 00:46:15.144873 kubelet[2637]: I0124 00:46:15.143822 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwcqt\" (UniqueName: \"kubernetes.io/projected/c8ef6693-42b2-4015-8f31-4aeadd5a6288-kube-api-access-nwcqt\") pod \"calico-apiserver-7b6b4f6b7-r8txm\" (UID: \"c8ef6693-42b2-4015-8f31-4aeadd5a6288\") " pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm"
Jan 24 00:46:15.144873 kubelet[2637]: I0124 00:46:15.143900 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlrsq\" (UniqueName: \"kubernetes.io/projected/35a4077d-452d-4ef7-8393-2463352fe219-kube-api-access-zlrsq\") pod \"calico-apiserver-7b6b4f6b7-w9z5c\" (UID: \"35a4077d-452d-4ef7-8393-2463352fe219\") " pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c"
Jan 24 00:46:15.144873 kubelet[2637]: I0124 00:46:15.143934 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10921630-8357-43e3-be35-29668acdc0c4-config\") pod \"goldmane-666569f655-pshxz\" (UID: \"10921630-8357-43e3-be35-29668acdc0c4\") " pod="calico-system/goldmane-666569f655-pshxz"
Jan 24 00:46:15.144873 kubelet[2637]: I0124 00:46:15.143966 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2872f4e1-5630-449e-b234-4abce1bebc05-whisker-backend-key-pair\") pod \"whisker-66496bc8f5-xm522\" (UID: \"2872f4e1-5630-449e-b234-4abce1bebc05\") " pod="calico-system/whisker-66496bc8f5-xm522"
Jan 24 00:46:15.144873 kubelet[2637]: I0124 00:46:15.143998 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10921630-8357-43e3-be35-29668acdc0c4-goldmane-ca-bundle\") pod \"goldmane-666569f655-pshxz\" (UID: \"10921630-8357-43e3-be35-29668acdc0c4\") " pod="calico-system/goldmane-666569f655-pshxz"
Jan 24 00:46:15.145704 kubelet[2637]: I0124 00:46:15.144028 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/10921630-8357-43e3-be35-29668acdc0c4-goldmane-key-pair\") pod \"goldmane-666569f655-pshxz\" (UID: \"10921630-8357-43e3-be35-29668acdc0c4\") " pod="calico-system/goldmane-666569f655-pshxz"
Jan 24 00:46:15.145704 kubelet[2637]: I0124 00:46:15.144057 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgcxx\" (UniqueName: \"kubernetes.io/projected/2872f4e1-5630-449e-b234-4abce1bebc05-kube-api-access-pgcxx\") pod \"whisker-66496bc8f5-xm522\" (UID: \"2872f4e1-5630-449e-b234-4abce1bebc05\") " pod="calico-system/whisker-66496bc8f5-xm522"
Jan 24 00:46:15.145704 kubelet[2637]: I0124 00:46:15.144103 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc52092c-b025-48a8-bd58-59cac1d3f427-config-volume\") pod \"coredns-668d6bf9bc-8ln7b\" (UID: \"bc52092c-b025-48a8-bd58-59cac1d3f427\") " pod="kube-system/coredns-668d6bf9bc-8ln7b"
Jan 24 00:46:15.145704 kubelet[2637]: I0124 00:46:15.144142 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8wl7\" (UniqueName: \"kubernetes.io/projected/10921630-8357-43e3-be35-29668acdc0c4-kube-api-access-h8wl7\") pod \"goldmane-666569f655-pshxz\" (UID: \"10921630-8357-43e3-be35-29668acdc0c4\") " pod="calico-system/goldmane-666569f655-pshxz"
Jan 24 00:46:15.145704 kubelet[2637]: I0124 00:46:15.144171 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2872f4e1-5630-449e-b234-4abce1bebc05-whisker-ca-bundle\") pod \"whisker-66496bc8f5-xm522\" (UID: \"2872f4e1-5630-449e-b234-4abce1bebc05\") " pod="calico-system/whisker-66496bc8f5-xm522"
Jan 24 00:46:15.146017 kubelet[2637]: I0124 00:46:15.144234 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c8ef6693-42b2-4015-8f31-4aeadd5a6288-calico-apiserver-certs\") pod \"calico-apiserver-7b6b4f6b7-r8txm\" (UID: \"c8ef6693-42b2-4015-8f31-4aeadd5a6288\") " pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm"
Jan 24 00:46:15.146017 kubelet[2637]: I0124 00:46:15.144263 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lrcn\" (UniqueName: \"kubernetes.io/projected/bc52092c-b025-48a8-bd58-59cac1d3f427-kube-api-access-4lrcn\") pod \"coredns-668d6bf9bc-8ln7b\" (UID: \"bc52092c-b025-48a8-bd58-59cac1d3f427\") " pod="kube-system/coredns-668d6bf9bc-8ln7b"
Jan 24 00:46:15.146017 kubelet[2637]: I0124 00:46:15.144320 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/35a4077d-452d-4ef7-8393-2463352fe219-calico-apiserver-certs\") pod \"calico-apiserver-7b6b4f6b7-w9z5c\" (UID: \"35a4077d-452d-4ef7-8393-2463352fe219\") " pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c"
Jan 24 00:46:15.146017 kubelet[2637]: I0124 00:46:15.144353 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d0e50ad-79cd-460b-a113-c524281d7733-tigera-ca-bundle\") pod \"calico-kube-controllers-d8cb4b6c4-b426z\" (UID: \"8d0e50ad-79cd-460b-a113-c524281d7733\") " pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z"
Jan 24 00:46:15.146017 kubelet[2637]: I0124 00:46:15.144405 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc42n\" (UniqueName: \"kubernetes.io/projected/8d0e50ad-79cd-460b-a113-c524281d7733-kube-api-access-lc42n\") pod \"calico-kube-controllers-d8cb4b6c4-b426z\" (UID: \"8d0e50ad-79cd-460b-a113-c524281d7733\") " pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z"
Jan 24 00:46:15.367122 containerd[1468]: time="2026-01-24T00:46:15.366578053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8cb4b6c4-b426z,Uid:8d0e50ad-79cd-460b-a113-c524281d7733,Namespace:calico-system,Attempt:0,}"
Jan 24 00:46:15.793836 containerd[1468]: time="2026-01-24T00:46:15.792122549Z" level=info msg="shim disconnected" id=e2572118051cf254c8baa6891e4276a8229d29a4a840a1ccfcc08ad285da41c2 namespace=k8s.io
Jan 24 00:46:15.793836 containerd[1468]: time="2026-01-24T00:46:15.792234087Z" level=warning msg="cleaning up after shim disconnected" id=e2572118051cf254c8baa6891e4276a8229d29a4a840a1ccfcc08ad285da41c2 namespace=k8s.io
Jan 24 00:46:15.793836 containerd[1468]: time="2026-01-24T00:46:15.792252048Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:46:15.877410 containerd[1468]: time="2026-01-24T00:46:15.877343607Z" level=error msg="Failed to destroy network for sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:15.877999 containerd[1468]: time="2026-01-24T00:46:15.877844292Z" level=error msg="encountered an error cleaning up failed sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:15.877999 containerd[1468]: time="2026-01-24T00:46:15.877925719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8cb4b6c4-b426z,Uid:8d0e50ad-79cd-460b-a113-c524281d7733,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:15.878862 kubelet[2637]: E0124 00:46:15.878374 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:15.878862 kubelet[2637]: E0124 00:46:15.878460 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z"
Jan 24 00:46:15.878862 kubelet[2637]: E0124 00:46:15.878487 2637 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z"
Jan 24 00:46:15.879386 kubelet[2637]: E0124 00:46:15.878533 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d8cb4b6c4-b426z_calico-system(8d0e50ad-79cd-460b-a113-c524281d7733)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d8cb4b6c4-b426z_calico-system(8d0e50ad-79cd-460b-a113-c524281d7733)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z" podUID="8d0e50ad-79cd-460b-a113-c524281d7733"
Jan 24 00:46:16.094469 systemd[1]: Created slice kubepods-besteffort-podeb162143_7b21_40da_95af_2a95960643a6.slice - libcontainer container kubepods-besteffort-podeb162143_7b21_40da_95af_2a95960643a6.slice.
Jan 24 00:46:16.099139 containerd[1468]: time="2026-01-24T00:46:16.099078150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k95gq,Uid:eb162143-7b21-40da-95af-2a95960643a6,Namespace:calico-system,Attempt:0,}"
Jan 24 00:46:16.214065 containerd[1468]: time="2026-01-24T00:46:16.212860953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trwx8,Uid:9897b791-32b8-489d-b2bb-407f3c85a8e0,Namespace:kube-system,Attempt:0,}"
Jan 24 00:46:16.228003 containerd[1468]: time="2026-01-24T00:46:16.227874582Z" level=error msg="Failed to destroy network for sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.230447 containerd[1468]: time="2026-01-24T00:46:16.230405652Z" level=error msg="encountered an error cleaning up failed sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.232297 containerd[1468]: time="2026-01-24T00:46:16.231390120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k95gq,Uid:eb162143-7b21-40da-95af-2a95960643a6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.232297 containerd[1468]: time="2026-01-24T00:46:16.231923351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 24 00:46:16.232476 kubelet[2637]: E0124 00:46:16.232372 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.232476 kubelet[2637]: E0124 00:46:16.232423 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k95gq"
Jan 24 00:46:16.232476 kubelet[2637]: E0124 00:46:16.232455 2637 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k95gq"
Jan 24 00:46:16.235927 kubelet[2637]: E0124 00:46:16.233041 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k95gq_calico-system(eb162143-7b21-40da-95af-2a95960643a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k95gq_calico-system(eb162143-7b21-40da-95af-2a95960643a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6"
Jan 24 00:46:16.235927 kubelet[2637]: I0124 00:46:16.235071 2637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be"
Jan 24 00:46:16.236485 containerd[1468]: time="2026-01-24T00:46:16.236296828Z" level=info msg="StopPodSandbox for \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\""
Jan 24 00:46:16.237559 containerd[1468]: time="2026-01-24T00:46:16.236514124Z" level=info msg="Ensure that sandbox fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be in task-service has been cleanup successfully"
Jan 24 00:46:16.245447 kubelet[2637]: E0124 00:46:16.245170 2637 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition
Jan 24 00:46:16.245447 kubelet[2637]: E0124 00:46:16.245300 2637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2872f4e1-5630-449e-b234-4abce1bebc05-whisker-backend-key-pair podName:2872f4e1-5630-449e-b234-4abce1bebc05 nodeName:}" failed. No retries permitted until 2026-01-24 00:46:16.745258325 +0000 UTC m=+34.847822167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/2872f4e1-5630-449e-b234-4abce1bebc05-whisker-backend-key-pair") pod "whisker-66496bc8f5-xm522" (UID: "2872f4e1-5630-449e-b234-4abce1bebc05") : failed to sync secret cache: timed out waiting for the condition
Jan 24 00:46:16.245447 kubelet[2637]: E0124 00:46:16.245345 2637 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Jan 24 00:46:16.245447 kubelet[2637]: E0124 00:46:16.245397 2637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/10921630-8357-43e3-be35-29668acdc0c4-goldmane-ca-bundle podName:10921630-8357-43e3-be35-29668acdc0c4 nodeName:}" failed. No retries permitted until 2026-01-24 00:46:16.745382179 +0000 UTC m=+34.847946014 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/10921630-8357-43e3-be35-29668acdc0c4-goldmane-ca-bundle") pod "goldmane-666569f655-pshxz" (UID: "10921630-8357-43e3-be35-29668acdc0c4") : failed to sync configmap cache: timed out waiting for the condition
Jan 24 00:46:16.247933 kubelet[2637]: E0124 00:46:16.247903 2637 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition
Jan 24 00:46:16.249485 kubelet[2637]: E0124 00:46:16.249116 2637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/10921630-8357-43e3-be35-29668acdc0c4-config podName:10921630-8357-43e3-be35-29668acdc0c4 nodeName:}" failed. No retries permitted until 2026-01-24 00:46:16.749078643 +0000 UTC m=+34.851642482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/10921630-8357-43e3-be35-29668acdc0c4-config") pod "goldmane-666569f655-pshxz" (UID: "10921630-8357-43e3-be35-29668acdc0c4") : failed to sync configmap cache: timed out waiting for the condition
Jan 24 00:46:16.265313 containerd[1468]: time="2026-01-24T00:46:16.264794786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8ln7b,Uid:bc52092c-b025-48a8-bd58-59cac1d3f427,Namespace:kube-system,Attempt:0,}"
Jan 24 00:46:16.306791 containerd[1468]: time="2026-01-24T00:46:16.306736220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6b4f6b7-w9z5c,Uid:35a4077d-452d-4ef7-8393-2463352fe219,Namespace:calico-apiserver,Attempt:0,}"
Jan 24 00:46:16.313260 containerd[1468]: time="2026-01-24T00:46:16.313207238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6b4f6b7-r8txm,Uid:c8ef6693-42b2-4015-8f31-4aeadd5a6288,Namespace:calico-apiserver,Attempt:0,}"
Jan 24 00:46:16.354400 containerd[1468]: time="2026-01-24T00:46:16.352974683Z" level=error msg="StopPodSandbox for \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\" failed" error="failed to destroy network for sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.355467 kubelet[2637]: E0124 00:46:16.354874 2637 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be"
Jan 24 00:46:16.355467 kubelet[2637]: E0124 00:46:16.354965 2637 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be"}
Jan 24 00:46:16.355467 kubelet[2637]: E0124 00:46:16.355047 2637 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d0e50ad-79cd-460b-a113-c524281d7733\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:46:16.355467 kubelet[2637]: E0124 00:46:16.355348 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d0e50ad-79cd-460b-a113-c524281d7733\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z" podUID="8d0e50ad-79cd-460b-a113-c524281d7733"
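The nestedpendingoperations entries above show the kubelet backing off failed volume mounts with a durationBeforeRetry of 500ms. Assuming the usual kubelet exponential backoff for volume operations (initial delay doubling per failure up to a cap of roughly two minutes; the factor and cap are assumptions, not something this log states), the retry schedule would look like:

    #!/usr/bin/env python3
    # Illustrative sketch of an exponential backoff schedule starting at the
    # 500ms durationBeforeRetry logged above. Factor and cap are assumptions.
    delay = 0.5     # seconds, from "durationBeforeRetry 500ms"
    elapsed = 0.0
    for attempt in range(1, 9):
        print(f"attempt {attempt}: wait {delay:.1f}s (elapsed ~{elapsed:.1f}s)")
        elapsed += delay
        delay = min(delay * 2.0, 122.0)   # assumed factor 2, cap ~2m2s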
Jan 24 00:46:16.427424 containerd[1468]: time="2026-01-24T00:46:16.427346148Z" level=error msg="Failed to destroy network for sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.428141 containerd[1468]: time="2026-01-24T00:46:16.427946528Z" level=error msg="encountered an error cleaning up failed sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.428141 containerd[1468]: time="2026-01-24T00:46:16.428064036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trwx8,Uid:9897b791-32b8-489d-b2bb-407f3c85a8e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.429059 kubelet[2637]: E0124 00:46:16.428410 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.429059 kubelet[2637]: E0124 00:46:16.428476 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-trwx8"
Jan 24 00:46:16.429059 kubelet[2637]: E0124 00:46:16.428512 2637 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-trwx8"
Jan 24 00:46:16.429310 kubelet[2637]: E0124 00:46:16.428578 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-trwx8_kube-system(9897b791-32b8-489d-b2bb-407f3c85a8e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-trwx8_kube-system(9897b791-32b8-489d-b2bb-407f3c85a8e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-trwx8" podUID="9897b791-32b8-489d-b2bb-407f3c85a8e0"
Jan 24 00:46:16.534416 containerd[1468]: time="2026-01-24T00:46:16.534332958Z" level=error msg="Failed to destroy network for sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.534840 containerd[1468]: time="2026-01-24T00:46:16.534796375Z" level=error msg="encountered an error cleaning up failed sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.534981 containerd[1468]: time="2026-01-24T00:46:16.534880796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8ln7b,Uid:bc52092c-b025-48a8-bd58-59cac1d3f427,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.536579 kubelet[2637]: E0124 00:46:16.535176 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.536579 kubelet[2637]: E0124 00:46:16.535280 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8ln7b"
Jan 24 00:46:16.536579 kubelet[2637]: E0124 00:46:16.535313 2637 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8ln7b"
Jan 24 00:46:16.536783 kubelet[2637]: E0124 00:46:16.535377 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8ln7b_kube-system(bc52092c-b025-48a8-bd58-59cac1d3f427)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8ln7b_kube-system(bc52092c-b025-48a8-bd58-59cac1d3f427)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8ln7b" podUID="bc52092c-b025-48a8-bd58-59cac1d3f427"
Jan 24 00:46:16.567010 containerd[1468]: time="2026-01-24T00:46:16.566480458Z" level=error msg="Failed to destroy network for sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.567370 containerd[1468]: time="2026-01-24T00:46:16.567296276Z" level=error msg="encountered an error cleaning up failed sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.567370 containerd[1468]: time="2026-01-24T00:46:16.567371494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6b4f6b7-w9z5c,Uid:35a4077d-452d-4ef7-8393-2463352fe219,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.568400 kubelet[2637]: E0124 00:46:16.567795 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.568400 kubelet[2637]: E0124 00:46:16.567868 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c"
Jan 24 00:46:16.568400 kubelet[2637]: E0124 00:46:16.567906 2637 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c"
Jan 24 00:46:16.568618 kubelet[2637]: E0124 00:46:16.567960 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b6b4f6b7-w9z5c_calico-apiserver(35a4077d-452d-4ef7-8393-2463352fe219)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b6b4f6b7-w9z5c_calico-apiserver(35a4077d-452d-4ef7-8393-2463352fe219)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c" podUID="35a4077d-452d-4ef7-8393-2463352fe219"
Jan 24 00:46:16.578849 containerd[1468]: time="2026-01-24T00:46:16.578788770Z" level=error msg="Failed to destroy network for sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.579308 containerd[1468]: time="2026-01-24T00:46:16.579250660Z" level=error msg="encountered an error cleaning up failed sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.579454 containerd[1468]: time="2026-01-24T00:46:16.579332553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6b4f6b7-r8txm,Uid:c8ef6693-42b2-4015-8f31-4aeadd5a6288,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.579639 kubelet[2637]: E0124 00:46:16.579589 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.579746 kubelet[2637]: E0124 00:46:16.579657 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm"
Jan 24 00:46:16.579746 kubelet[2637]: E0124 00:46:16.579690 2637 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm"
Jan 24 00:46:16.579859 kubelet[2637]: E0124 00:46:16.579795 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b6b4f6b7-r8txm_calico-apiserver(c8ef6693-42b2-4015-8f31-4aeadd5a6288)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b6b4f6b7-r8txm_calico-apiserver(c8ef6693-42b2-4015-8f31-4aeadd5a6288)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm" podUID="c8ef6693-42b2-4015-8f31-4aeadd5a6288"
Jan 24 00:46:16.829027 containerd[1468]: time="2026-01-24T00:46:16.828971157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66496bc8f5-xm522,Uid:2872f4e1-5630-449e-b234-4abce1bebc05,Namespace:calico-system,Attempt:0,}"
Jan 24 00:46:16.883532 containerd[1468]: time="2026-01-24T00:46:16.883432945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pshxz,Uid:10921630-8357-43e3-be35-29668acdc0c4,Namespace:calico-system,Attempt:0,}"
Jan 24 00:46:16.947095 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5-shm.mount: Deactivated successfully.
Jan 24 00:46:16.970169 containerd[1468]: time="2026-01-24T00:46:16.970099857Z" level=error msg="Failed to destroy network for sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.976357 containerd[1468]: time="2026-01-24T00:46:16.972236088Z" level=error msg="encountered an error cleaning up failed sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.976357 containerd[1468]: time="2026-01-24T00:46:16.972319900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66496bc8f5-xm522,Uid:2872f4e1-5630-449e-b234-4abce1bebc05,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.976804 kubelet[2637]: E0124 00:46:16.972691 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:16.976804 kubelet[2637]: E0124 00:46:16.972785 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66496bc8f5-xm522"
Jan 24 00:46:16.976804 kubelet[2637]: E0124 00:46:16.972821 2637 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66496bc8f5-xm522"
Jan 24 00:46:16.977003 kubelet[2637]: E0124 00:46:16.972893 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66496bc8f5-xm522_calico-system(2872f4e1-5630-449e-b234-4abce1bebc05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66496bc8f5-xm522_calico-system(2872f4e1-5630-449e-b234-4abce1bebc05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66496bc8f5-xm522" podUID="2872f4e1-5630-449e-b234-4abce1bebc05"
Jan 24 00:46:16.981411 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7-shm.mount: Deactivated successfully.
Jan 24 00:46:17.067743 containerd[1468]: time="2026-01-24T00:46:17.067242729Z" level=error msg="Failed to destroy network for sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:17.067917 containerd[1468]: time="2026-01-24T00:46:17.067762157Z" level=error msg="encountered an error cleaning up failed sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:17.067917 containerd[1468]: time="2026-01-24T00:46:17.067833527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pshxz,Uid:10921630-8357-43e3-be35-29668acdc0c4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:17.068302 kubelet[2637]: E0124 00:46:17.068156 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:46:17.068436 kubelet[2637]: E0124 00:46:17.068354 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pshxz"
Jan 24 00:46:17.068436 kubelet[2637]: E0124 00:46:17.068396 2637 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pshxz"
Jan 24 00:46:17.068616 kubelet[2637]: E0124 00:46:17.068462 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-pshxz_calico-system(10921630-8357-43e3-be35-29668acdc0c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-pshxz_calico-system(10921630-8357-43e3-be35-29668acdc0c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-pshxz" podUID="10921630-8357-43e3-be35-29668acdc0c4"
Jan 24 00:46:17.079859 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42-shm.mount: Deactivated successfully.
Jan 24 00:46:17.241231 kubelet[2637]: I0124 00:46:17.239774 2637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68"
Jan 24 00:46:17.242320 containerd[1468]: time="2026-01-24T00:46:17.242275116Z" level=info msg="StopPodSandbox for \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\""
Jan 24 00:46:17.242588 containerd[1468]: time="2026-01-24T00:46:17.242555644Z" level=info msg="Ensure that sandbox f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68 in task-service has been cleanup successfully"
Jan 24 00:46:17.260883 kubelet[2637]: I0124 00:46:17.260848 2637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0"
Jan 24 00:46:17.266164 containerd[1468]: time="2026-01-24T00:46:17.266107954Z" level=info msg="StopPodSandbox for \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\""
Jan 24 00:46:17.268235 containerd[1468]: time="2026-01-24T00:46:17.268168245Z" level=info msg="Ensure that sandbox 46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0 in task-service has been cleanup successfully"
Jan 24 00:46:17.271839 kubelet[2637]: I0124 00:46:17.271790 2637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d"
Jan 24 00:46:17.275370 containerd[1468]: time="2026-01-24T00:46:17.275141846Z" level=info msg="StopPodSandbox for \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\""
Jan 24 00:46:17.278876 kubelet[2637]: I0124 00:46:17.278242 2637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5"
Jan 24 00:46:17.279916 containerd[1468]: time="2026-01-24T00:46:17.279545339Z" level=info msg="StopPodSandbox for \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\""
Jan 24 00:46:17.283767 containerd[1468]: time="2026-01-24T00:46:17.283336577Z" level=info msg="Ensure that sandbox 773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d in task-service has been cleanup successfully"
Jan 24 00:46:17.285622 kubelet[2637]: I0124 00:46:17.285582 2637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42"
Jan 24 00:46:17.287272 containerd[1468]: time="2026-01-24T00:46:17.283421816Z" level=info msg="Ensure that sandbox c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5 in task-service has been cleanup successfully"
Jan 24 00:46:17.296201 containerd[1468]: time="2026-01-24T00:46:17.296137689Z" level=info msg="StopPodSandbox for \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\""
Jan 24 00:46:17.296912 containerd[1468]: time="2026-01-24T00:46:17.296868645Z" level=info msg="Ensure that sandbox 
97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42 in task-service has been cleanup successfully" Jan 24 00:46:17.313644 kubelet[2637]: I0124 00:46:17.313605 2637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:17.319790 containerd[1468]: time="2026-01-24T00:46:17.319738727Z" level=info msg="StopPodSandbox for \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\"" Jan 24 00:46:17.320131 containerd[1468]: time="2026-01-24T00:46:17.320091527Z" level=info msg="Ensure that sandbox 67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7 in task-service has been cleanup successfully" Jan 24 00:46:17.360141 kubelet[2637]: I0124 00:46:17.358334 2637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:17.363361 containerd[1468]: time="2026-01-24T00:46:17.363311111Z" level=info msg="StopPodSandbox for \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\"" Jan 24 00:46:17.367654 containerd[1468]: time="2026-01-24T00:46:17.367612126Z" level=info msg="Ensure that sandbox 156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162 in task-service has been cleanup successfully" Jan 24 00:46:17.456427 containerd[1468]: time="2026-01-24T00:46:17.456352572Z" level=error msg="StopPodSandbox for \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\" failed" error="failed to destroy network for sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:46:17.456982 kubelet[2637]: E0124 00:46:17.456919 2637 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:17.457121 kubelet[2637]: E0124 00:46:17.457002 2637 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68"} Jan 24 00:46:17.457121 kubelet[2637]: E0124 00:46:17.457056 2637 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc52092c-b025-48a8-bd58-59cac1d3f427\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:46:17.457121 kubelet[2637]: E0124 00:46:17.457093 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc52092c-b025-48a8-bd58-59cac1d3f427\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8ln7b" podUID="bc52092c-b025-48a8-bd58-59cac1d3f427" Jan 24 00:46:17.467809 containerd[1468]: time="2026-01-24T00:46:17.467611784Z" level=error msg="StopPodSandbox for \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\" failed" error="failed to destroy network for sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:46:17.470291 kubelet[2637]: E0124 00:46:17.470138 2637 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:17.470291 kubelet[2637]: E0124 00:46:17.470216 2637 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0"} Jan 24 00:46:17.470291 kubelet[2637]: E0124 00:46:17.470268 2637 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8ef6693-42b2-4015-8f31-4aeadd5a6288\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:46:17.470291 kubelet[2637]: E0124 00:46:17.470301 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8ef6693-42b2-4015-8f31-4aeadd5a6288\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm" podUID="c8ef6693-42b2-4015-8f31-4aeadd5a6288" Jan 24 00:46:17.487686 containerd[1468]: time="2026-01-24T00:46:17.487104241Z" level=error msg="StopPodSandbox for \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\" failed" error="failed to destroy network for sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:46:17.487878 kubelet[2637]: E0124 00:46:17.487420 2637 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:17.487878 kubelet[2637]: E0124 00:46:17.487493 2637 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d"} Jan 24 00:46:17.487878 kubelet[2637]: E0124 00:46:17.487554 2637 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9897b791-32b8-489d-b2bb-407f3c85a8e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:46:17.487878 kubelet[2637]: E0124 00:46:17.487592 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9897b791-32b8-489d-b2bb-407f3c85a8e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-trwx8" podUID="9897b791-32b8-489d-b2bb-407f3c85a8e0" Jan 24 00:46:17.490227 containerd[1468]: time="2026-01-24T00:46:17.490145567Z" level=error msg="StopPodSandbox for \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\" failed" error="failed to destroy network for sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:46:17.490678 kubelet[2637]: E0124 00:46:17.490461 2637 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:17.490678 kubelet[2637]: E0124 00:46:17.490522 2637 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7"} Jan 24 00:46:17.490678 kubelet[2637]: E0124 00:46:17.490570 2637 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2872f4e1-5630-449e-b234-4abce1bebc05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:46:17.490678 kubelet[2637]: E0124 00:46:17.490611 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"2872f4e1-5630-449e-b234-4abce1bebc05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66496bc8f5-xm522" podUID="2872f4e1-5630-449e-b234-4abce1bebc05" Jan 24 00:46:17.501849 containerd[1468]: time="2026-01-24T00:46:17.501759152Z" level=error msg="StopPodSandbox for \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\" failed" error="failed to destroy network for sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:46:17.502126 kubelet[2637]: E0124 00:46:17.502081 2637 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:17.502339 kubelet[2637]: E0124 00:46:17.502153 2637 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5"} Jan 24 00:46:17.502958 kubelet[2637]: E0124 00:46:17.502892 2637 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb162143-7b21-40da-95af-2a95960643a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:46:17.503145 kubelet[2637]: E0124 00:46:17.502952 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb162143-7b21-40da-95af-2a95960643a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6" Jan 24 00:46:17.520582 containerd[1468]: time="2026-01-24T00:46:17.520516442Z" level=error msg="StopPodSandbox for \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\" failed" error="failed to destroy network for sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:46:17.520855 kubelet[2637]: E0124 00:46:17.520809 2637 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:17.520997 kubelet[2637]: E0124 00:46:17.520880 2637 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42"} Jan 24 00:46:17.520997 kubelet[2637]: E0124 00:46:17.520939 2637 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10921630-8357-43e3-be35-29668acdc0c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:46:17.520997 kubelet[2637]: E0124 00:46:17.520978 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10921630-8357-43e3-be35-29668acdc0c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-pshxz" podUID="10921630-8357-43e3-be35-29668acdc0c4" Jan 24 00:46:17.542482 containerd[1468]: time="2026-01-24T00:46:17.541944498Z" level=error msg="StopPodSandbox for \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\" failed" error="failed to destroy network for sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:46:17.542664 kubelet[2637]: E0124 00:46:17.542271 2637 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:17.542664 kubelet[2637]: E0124 00:46:17.542337 2637 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162"} Jan 24 00:46:17.542664 kubelet[2637]: E0124 00:46:17.542388 2637 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35a4077d-452d-4ef7-8393-2463352fe219\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:46:17.542664 kubelet[2637]: E0124 00:46:17.542429 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35a4077d-452d-4ef7-8393-2463352fe219\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c" podUID="35a4077d-452d-4ef7-8393-2463352fe219" Jan 24 00:46:23.493230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885588404.mount: Deactivated successfully. Jan 24 00:46:23.522700 containerd[1468]: time="2026-01-24T00:46:23.522610650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:46:23.523994 containerd[1468]: time="2026-01-24T00:46:23.523912660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:46:23.526906 containerd[1468]: time="2026-01-24T00:46:23.525015971Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:46:23.529442 containerd[1468]: time="2026-01-24T00:46:23.528204984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:46:23.529442 containerd[1468]: time="2026-01-24T00:46:23.529254705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.297293914s" Jan 24 00:46:23.529442 containerd[1468]: time="2026-01-24T00:46:23.529298596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:46:23.554694 containerd[1468]: time="2026-01-24T00:46:23.554651569Z" level=info msg="CreateContainer within sandbox \"bb589e3c8de17f05b32053881819b8cd3c861a997141279990a67d1c94a44e9e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:46:23.577591 containerd[1468]: time="2026-01-24T00:46:23.577526923Z" level=info msg="CreateContainer within sandbox \"bb589e3c8de17f05b32053881819b8cd3c861a997141279990a67d1c94a44e9e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"97273ee06ef14a28959494ab712cd9ff2ae628cca54627fba2bd4de26d81b266\"" Jan 24 00:46:23.578631 containerd[1468]: time="2026-01-24T00:46:23.578509512Z" level=info msg="StartContainer for \"97273ee06ef14a28959494ab712cd9ff2ae628cca54627fba2bd4de26d81b266\"" Jan 24 00:46:23.618451 systemd[1]: Started cri-containerd-97273ee06ef14a28959494ab712cd9ff2ae628cca54627fba2bd4de26d81b266.scope - libcontainer container 97273ee06ef14a28959494ab712cd9ff2ae628cca54627fba2bd4de26d81b266. 
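Every sandbox ADD and DEL above fails the same way: the Calico CNI plugin cannot read /var/lib/calico/nodename, the file the calico/node container writes once it is up, so the plugin bails out before touching the network. A minimal Go sketch of that gate, assuming only what the error text itself states (the path and the hint are taken from the log; the helper name is ours, not Calico's):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the readiness marker named in every error above.
const nodenameFile = "/var/lib/calico/nodename"

// nodeName mirrors the check implied by the log: if the file written by
// calico/node is missing, fail fast with a hint instead of attempting
// any network setup or teardown.
func nodeName() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("reading %s: %w (check that the calico/node container is running and has mounted /var/lib/calico/)", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CNI ADD/DEL may proceed for node:", name)
}
```

Once calico-node actually starts (the StartContainer above), the file exists and the same sandbox teardowns begin to succeed, as the following entries show.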
Jan 24 00:46:23.666112 containerd[1468]: time="2026-01-24T00:46:23.665763305Z" level=info msg="StartContainer for \"97273ee06ef14a28959494ab712cd9ff2ae628cca54627fba2bd4de26d81b266\" returns successfully" Jan 24 00:46:23.797936 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:46:23.798146 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 24 00:46:23.934576 containerd[1468]: time="2026-01-24T00:46:23.934095838Z" level=info msg="StopPodSandbox for \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\"" Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.037 [INFO][3790] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.038 [INFO][3790] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" iface="eth0" netns="/var/run/netns/cni-a3059036-2ca2-aa00-9be2-b2ec0de2e4ee" Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.039 [INFO][3790] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" iface="eth0" netns="/var/run/netns/cni-a3059036-2ca2-aa00-9be2-b2ec0de2e4ee" Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.039 [INFO][3790] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" iface="eth0" netns="/var/run/netns/cni-a3059036-2ca2-aa00-9be2-b2ec0de2e4ee" Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.039 [INFO][3790] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.039 [INFO][3790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.086 [INFO][3798] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" HandleID="k8s-pod-network.67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--66496bc8f5--xm522-eth0" Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.087 [INFO][3798] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.087 [INFO][3798] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.103 [WARNING][3798] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist.
Ignoring ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" HandleID="k8s-pod-network.67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--66496bc8f5--xm522-eth0" Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.103 [INFO][3798] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" HandleID="k8s-pod-network.67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--66496bc8f5--xm522-eth0" Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.106 [INFO][3798] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:24.118298 containerd[1468]: 2026-01-24 00:46:24.113 [INFO][3790] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:24.118298 containerd[1468]: time="2026-01-24T00:46:24.117708900Z" level=info msg="TearDown network for sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\" successfully" Jan 24 00:46:24.118298 containerd[1468]: time="2026-01-24T00:46:24.117781819Z" level=info msg="StopPodSandbox for \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\" returns successfully" Jan 24 00:46:24.219922 kubelet[2637]: I0124 00:46:24.219865 2637 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2872f4e1-5630-449e-b234-4abce1bebc05-whisker-backend-key-pair\") pod \"2872f4e1-5630-449e-b234-4abce1bebc05\" (UID: \"2872f4e1-5630-449e-b234-4abce1bebc05\") " Jan 24 00:46:24.221697 kubelet[2637]: I0124 00:46:24.219936 2637 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgcxx\" (UniqueName: \"kubernetes.io/projected/2872f4e1-5630-449e-b234-4abce1bebc05-kube-api-access-pgcxx\") pod \"2872f4e1-5630-449e-b234-4abce1bebc05\" (UID: \"2872f4e1-5630-449e-b234-4abce1bebc05\") " Jan 24 00:46:24.221697 kubelet[2637]: I0124 00:46:24.219967 2637 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2872f4e1-5630-449e-b234-4abce1bebc05-whisker-ca-bundle\") pod \"2872f4e1-5630-449e-b234-4abce1bebc05\" (UID: \"2872f4e1-5630-449e-b234-4abce1bebc05\") " Jan 24 00:46:24.221697 kubelet[2637]: I0124 00:46:24.220880 2637 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2872f4e1-5630-449e-b234-4abce1bebc05-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2872f4e1-5630-449e-b234-4abce1bebc05" (UID: "2872f4e1-5630-449e-b234-4abce1bebc05"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:46:24.227593 kubelet[2637]: I0124 00:46:24.227002 2637 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2872f4e1-5630-449e-b234-4abce1bebc05-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2872f4e1-5630-449e-b234-4abce1bebc05" (UID: "2872f4e1-5630-449e-b234-4abce1bebc05"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:46:24.229373 kubelet[2637]: I0124 00:46:24.229329 2637 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2872f4e1-5630-449e-b234-4abce1bebc05-kube-api-access-pgcxx" (OuterVolumeSpecName: "kube-api-access-pgcxx") pod "2872f4e1-5630-449e-b234-4abce1bebc05" (UID: "2872f4e1-5630-449e-b234-4abce1bebc05"). InnerVolumeSpecName "kube-api-access-pgcxx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:46:24.321072 kubelet[2637]: I0124 00:46:24.320998 2637 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2872f4e1-5630-449e-b234-4abce1bebc05-whisker-backend-key-pair\") on node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" DevicePath \"\"" Jan 24 00:46:24.321072 kubelet[2637]: I0124 00:46:24.321047 2637 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2872f4e1-5630-449e-b234-4abce1bebc05-whisker-ca-bundle\") on node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" DevicePath \"\"" Jan 24 00:46:24.321072 kubelet[2637]: I0124 00:46:24.321067 2637 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgcxx\" (UniqueName: \"kubernetes.io/projected/2872f4e1-5630-449e-b234-4abce1bebc05-kube-api-access-pgcxx\") on node \"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58\" DevicePath \"\"" Jan 24 00:46:24.387808 systemd[1]: Removed slice kubepods-besteffort-pod2872f4e1_5630_449e_b234_4abce1bebc05.slice - libcontainer container kubepods-besteffort-pod2872f4e1_5630_449e_b234_4abce1bebc05.slice. Jan 24 00:46:24.445300 kubelet[2637]: I0124 00:46:24.445206 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9kcxc" podStartSLOduration=2.008022356 podStartE2EDuration="18.445156322s" podCreationTimestamp="2026-01-24 00:46:06 +0000 UTC" firstStartedPulling="2026-01-24 00:46:07.093411615 +0000 UTC m=+25.195975444" lastFinishedPulling="2026-01-24 00:46:23.530545575 +0000 UTC m=+41.633109410" observedRunningTime="2026-01-24 00:46:24.427475312 +0000 UTC m=+42.530039157" watchObservedRunningTime="2026-01-24 00:46:24.445156322 +0000 UTC m=+42.547720167" Jan 24 00:46:24.498413 systemd[1]: run-netns-cni\x2da3059036\x2d2ca2\x2daa00\x2d9be2\x2db2ec0de2e4ee.mount: Deactivated successfully. Jan 24 00:46:24.499048 systemd[1]: var-lib-kubelet-pods-2872f4e1\x2d5630\x2d449e\x2db234\x2d4abce1bebc05-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:46:24.499602 systemd[1]: var-lib-kubelet-pods-2872f4e1\x2d5630\x2d449e\x2db234\x2d4abce1bebc05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpgcxx.mount: Deactivated successfully. Jan 24 00:46:24.517126 systemd[1]: Created slice kubepods-besteffort-pod45bea70f_544b_4bae_b0e7_4aaa5e7a4a02.slice - libcontainer container kubepods-besteffort-pod45bea70f_544b_4bae_b0e7_4aaa5e7a4a02.slice. 
Jan 24 00:46:24.625470 kubelet[2637]: I0124 00:46:24.625398 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsczk\" (UniqueName: \"kubernetes.io/projected/45bea70f-544b-4bae-b0e7-4aaa5e7a4a02-kube-api-access-zsczk\") pod \"whisker-68bfccf66c-47vl8\" (UID: \"45bea70f-544b-4bae-b0e7-4aaa5e7a4a02\") " pod="calico-system/whisker-68bfccf66c-47vl8" Jan 24 00:46:24.625470 kubelet[2637]: I0124 00:46:24.625463 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/45bea70f-544b-4bae-b0e7-4aaa5e7a4a02-whisker-backend-key-pair\") pod \"whisker-68bfccf66c-47vl8\" (UID: \"45bea70f-544b-4bae-b0e7-4aaa5e7a4a02\") " pod="calico-system/whisker-68bfccf66c-47vl8" Jan 24 00:46:24.625710 kubelet[2637]: I0124 00:46:24.625491 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bea70f-544b-4bae-b0e7-4aaa5e7a4a02-whisker-ca-bundle\") pod \"whisker-68bfccf66c-47vl8\" (UID: \"45bea70f-544b-4bae-b0e7-4aaa5e7a4a02\") " pod="calico-system/whisker-68bfccf66c-47vl8" Jan 24 00:46:24.826417 containerd[1468]: time="2026-01-24T00:46:24.826363500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68bfccf66c-47vl8,Uid:45bea70f-544b-4bae-b0e7-4aaa5e7a4a02,Namespace:calico-system,Attempt:0,}" Jan 24 00:46:24.984572 systemd-networkd[1383]: califcadb4058fe: Link UP Jan 24 00:46:24.985247 systemd-networkd[1383]: califcadb4058fe: Gained carrier Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.875 [INFO][3844] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.889 [INFO][3844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0 whisker-68bfccf66c- calico-system 45bea70f-544b-4bae-b0e7-4aaa5e7a4a02 916 0 2026-01-24 00:46:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68bfccf66c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58 whisker-68bfccf66c-47vl8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califcadb4058fe [] [] }} ContainerID="b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" Namespace="calico-system" Pod="whisker-68bfccf66c-47vl8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.889 [INFO][3844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" Namespace="calico-system" Pod="whisker-68bfccf66c-47vl8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.923 [INFO][3855] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" HandleID="k8s-pod-network.b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" 
Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.924 [INFO][3855] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" HandleID="k8s-pod-network.b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", "pod":"whisker-68bfccf66c-47vl8", "timestamp":"2026-01-24 00:46:24.92394764 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.924 [INFO][3855] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.924 [INFO][3855] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.924 [INFO][3855] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.935 [INFO][3855] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.939 [INFO][3855] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.945 [INFO][3855] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.947 [INFO][3855] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.950 [INFO][3855] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.950 [INFO][3855] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.952 [INFO][3855] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.958 [INFO][3855] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.969 
[INFO][3855] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.193/26] block=192.168.104.192/26 handle="k8s-pod-network.b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.969 [INFO][3855] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.193/26] handle="k8s-pod-network.b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.969 [INFO][3855] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:25.008000 containerd[1468]: 2026-01-24 00:46:24.969 [INFO][3855] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.193/26] IPv6=[] ContainerID="b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" HandleID="k8s-pod-network.b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0" Jan 24 00:46:25.009312 containerd[1468]: 2026-01-24 00:46:24.972 [INFO][3844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" Namespace="calico-system" Pod="whisker-68bfccf66c-47vl8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0", GenerateName:"whisker-68bfccf66c-", Namespace:"calico-system", SelfLink:"", UID:"45bea70f-544b-4bae-b0e7-4aaa5e7a4a02", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68bfccf66c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"", Pod:"whisker-68bfccf66c-47vl8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califcadb4058fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:25.009312 containerd[1468]: 2026-01-24 00:46:24.972 [INFO][3844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.193/32] ContainerID="b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" Namespace="calico-system" Pod="whisker-68bfccf66c-47vl8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0" Jan 24 00:46:25.009312 containerd[1468]: 2026-01-24 00:46:24.972 [INFO][3844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
califcadb4058fe ContainerID="b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" Namespace="calico-system" Pod="whisker-68bfccf66c-47vl8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0" Jan 24 00:46:25.009312 containerd[1468]: 2026-01-24 00:46:24.986 [INFO][3844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" Namespace="calico-system" Pod="whisker-68bfccf66c-47vl8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0" Jan 24 00:46:25.009312 containerd[1468]: 2026-01-24 00:46:24.987 [INFO][3844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" Namespace="calico-system" Pod="whisker-68bfccf66c-47vl8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0", GenerateName:"whisker-68bfccf66c-", Namespace:"calico-system", SelfLink:"", UID:"45bea70f-544b-4bae-b0e7-4aaa5e7a4a02", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68bfccf66c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d", Pod:"whisker-68bfccf66c-47vl8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califcadb4058fe", MAC:"9a:b4:78:46:1d:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:25.009312 containerd[1468]: 2026-01-24 00:46:25.001 [INFO][3844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d" Namespace="calico-system" Pod="whisker-68bfccf66c-47vl8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--68bfccf66c--47vl8-eth0" Jan 24 00:46:25.042535 containerd[1468]: time="2026-01-24T00:46:25.041055960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:46:25.042535 containerd[1468]: time="2026-01-24T00:46:25.041124597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:46:25.042535 containerd[1468]: time="2026-01-24T00:46:25.041144161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:25.042535 containerd[1468]: time="2026-01-24T00:46:25.041285266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:25.075479 systemd[1]: Started cri-containerd-b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d.scope - libcontainer container b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d. Jan 24 00:46:25.132402 containerd[1468]: time="2026-01-24T00:46:25.132327087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68bfccf66c-47vl8,Uid:45bea70f-544b-4bae-b0e7-4aaa5e7a4a02,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3364c39b8c2229093d0394a90b7fae4e8a9ff8e1649a4202661785932f5481d\"" Jan 24 00:46:25.137058 containerd[1468]: time="2026-01-24T00:46:25.136509207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:46:25.302414 containerd[1468]: time="2026-01-24T00:46:25.302340756Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:25.303559 containerd[1468]: time="2026-01-24T00:46:25.303498857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:46:25.305311 containerd[1468]: time="2026-01-24T00:46:25.303625041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:46:25.305416 kubelet[2637]: E0124 00:46:25.303854 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:46:25.305416 kubelet[2637]: E0124 00:46:25.303927 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:46:25.306524 kubelet[2637]: E0124 00:46:25.304130 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5ce10c0eba4c41889cfe691f8779a69a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsczk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68bfccf66c-47vl8_calico-system(45bea70f-544b-4bae-b0e7-4aaa5e7a4a02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:25.309572 containerd[1468]: time="2026-01-24T00:46:25.309531551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:46:25.490949 containerd[1468]: time="2026-01-24T00:46:25.490793973Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:25.494982 containerd[1468]: time="2026-01-24T00:46:25.494912638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:46:25.494982 containerd[1468]: time="2026-01-24T00:46:25.495018986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:46:25.496735 kubelet[2637]: E0124 00:46:25.495472 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:46:25.496735 kubelet[2637]: E0124 00:46:25.495547 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:46:25.496938 kubelet[2637]: E0124 00:46:25.495742 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsczk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68bfccf66c-47vl8_calico-system(45bea70f-544b-4bae-b0e7-4aaa5e7a4a02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:25.499855 kubelet[2637]: E0124 00:46:25.499710 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68bfccf66c-47vl8" podUID="45bea70f-544b-4bae-b0e7-4aaa5e7a4a02" Jan 24 00:46:26.090473 kubelet[2637]: I0124 00:46:26.090369 2637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2872f4e1-5630-449e-b234-4abce1bebc05" path="/var/lib/kubelet/pods/2872f4e1-5630-449e-b234-4abce1bebc05/volumes" Jan 24 
00:46:26.392096 kubelet[2637]: E0124 00:46:26.391798 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68bfccf66c-47vl8" podUID="45bea70f-544b-4bae-b0e7-4aaa5e7a4a02" Jan 24 00:46:26.670677 systemd-networkd[1383]: califcadb4058fe: Gained IPv6LL Jan 24 00:46:28.089215 containerd[1468]: time="2026-01-24T00:46:28.087954260Z" level=info msg="StopPodSandbox for \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\"" Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.188 [INFO][4072] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.190 [INFO][4072] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" iface="eth0" netns="/var/run/netns/cni-6654c499-76f5-2e3b-74f2-b2c16e0d247d" Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.190 [INFO][4072] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" iface="eth0" netns="/var/run/netns/cni-6654c499-76f5-2e3b-74f2-b2c16e0d247d" Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.191 [INFO][4072] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" iface="eth0" netns="/var/run/netns/cni-6654c499-76f5-2e3b-74f2-b2c16e0d247d" Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.191 [INFO][4072] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.191 [INFO][4072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.232 [INFO][4080] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" HandleID="k8s-pod-network.c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.232 [INFO][4080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.233 [INFO][4080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
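Annotation: the ErrImagePull/ImagePullBackOff entries above fail at reference resolution, i.e. the registry reports the tag does not exist. A minimal stdlib-Go sketch of the same check containerd's resolver performs (fetch an anonymous pull token, then HEAD the manifest) — this assumes ghcr.io's standard Docker/OCI token flow; a 404 here corresponds to the "not found" in the log. The repository and tag are taken from the log itself.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/whisker", "v3.30.4" // from the log above

	// ghcr.io requires a (free, anonymous) bearer token even for public pulls.
	res, err := http.Get(fmt.Sprintf(
		"https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo))
	if err != nil {
		panic(err)
	}
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(res.Body).Decode(&tok); err != nil {
		panic(err)
	}
	res.Body.Close()

	// HEAD the manifest: 200 means the reference resolves; 404 is the
	// "failed to resolve reference ... not found" containerd reports.
	req, err := http.NewRequest(http.MethodHead, fmt.Sprintf(
		"https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept",
		"application/vnd.oci.image.index.v1+json, "+
			"application/vnd.docker.distribution.manifest.list.v2+json, "+
			"application/vnd.oci.image.manifest.v1+json, "+
			"application/vnd.docker.distribution.manifest.v2+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Printf("%s:%s -> %s\n", repo, tag, resp.Status)
}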
Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.248 [WARNING][4080] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" HandleID="k8s-pod-network.c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.248 [INFO][4080] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" HandleID="k8s-pod-network.c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.250 [INFO][4080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:28.255976 containerd[1468]: 2026-01-24 00:46:28.252 [INFO][4072] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:28.255976 containerd[1468]: time="2026-01-24T00:46:28.255781474Z" level=info msg="TearDown network for sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\" successfully" Jan 24 00:46:28.255976 containerd[1468]: time="2026-01-24T00:46:28.255821567Z" level=info msg="StopPodSandbox for \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\" returns successfully" Jan 24 00:46:28.262313 containerd[1468]: time="2026-01-24T00:46:28.259747029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k95gq,Uid:eb162143-7b21-40da-95af-2a95960643a6,Namespace:calico-system,Attempt:1,}" Jan 24 00:46:28.264630 systemd[1]: run-netns-cni\x2d6654c499\x2d76f5\x2d2e3b\x2d74f2\x2db2c16e0d247d.mount: Deactivated successfully. 
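Annotation: the StopPodSandbox sequence above ends with systemd deactivating the run-netns-cni-… mount. containerd keeps one bind-mounted network-namespace file per sandbox under /var/run/netns (the cni-<uuid> names in the log); a trivial Go sketch to enumerate what remains after teardown:

package main

import (
	"fmt"
	"os"
)

func main() {
	// One bind-mounted file per live pod sandbox; after the TearDown above
	// succeeds, the cni-<uuid> entry for the stopped sandbox is removed.
	entries, err := os.ReadDir("/var/run/netns")
	if err != nil {
		fmt.Println("cannot read netns dir:", err)
		return
	}
	for _, e := range entries {
		fmt.Println("/var/run/netns/" + e.Name())
	}
}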
Jan 24 00:46:28.456011 systemd-networkd[1383]: calia1bf7bfa6c3: Link UP Jan 24 00:46:28.458791 systemd-networkd[1383]: calia1bf7bfa6c3: Gained carrier Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.334 [INFO][4090] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.358 [INFO][4090] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0 csi-node-driver- calico-system eb162143-7b21-40da-95af-2a95960643a6 945 0 2026-01-24 00:46:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58 csi-node-driver-k95gq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia1bf7bfa6c3 [] [] }} ContainerID="82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" Namespace="calico-system" Pod="csi-node-driver-k95gq" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.358 [INFO][4090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" Namespace="calico-system" Pod="csi-node-driver-k95gq" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.399 [INFO][4102] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" HandleID="k8s-pod-network.82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.399 [INFO][4102] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" HandleID="k8s-pod-network.82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", "pod":"csi-node-driver-k95gq", "timestamp":"2026-01-24 00:46:28.399410211 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.399 [INFO][4102] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.399 [INFO][4102] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
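Annotation: systemd-networkd brings up calia1bf7bfa6c3 just before Calico writes the endpoint; the interface name is deterministic, not random. A sketch of the scheme libcalico-go uses — "cali" plus the first 11 hex characters of a SHA-1 digest, keeping the name inside the 15-byte IFNAMSIZ limit. The exact digest input (namespace.pod here) varies by Calico version and is an assumption.

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName mirrors Calico's deterministic host-side veth naming:
// a fixed "cali" prefix plus a truncated SHA-1 digest.
func vethName(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Digest input is assumed; compare the shape against calia1bf7bfa6c3
	// from the log: "cali" + 11 hex chars.
	fmt.Println(vethName("calico-system", "csi-node-driver-k95gq"))
}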
Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.399 [INFO][4102] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.414 [INFO][4102] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.420 [INFO][4102] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.426 [INFO][4102] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.429 [INFO][4102] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.432 [INFO][4102] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.432 [INFO][4102] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.434 [INFO][4102] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1 Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.439 [INFO][4102] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.449 [INFO][4102] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.194/26] block=192.168.104.192/26 handle="k8s-pod-network.82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.449 [INFO][4102] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.194/26] handle="k8s-pod-network.82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.449 [INFO][4102] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
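Annotation: the IPAM walk above (look up affinity, load the block, claim an address) is a first-fit scan inside the host-affine 192.168.104.192/26. A minimal stdlib sketch of that scan; the two already-used addresses stand in for the block's real allocation bitmap and are assumptions, chosen so the result matches the .194 the CSI pod receives in the log.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node holds an affinity for this /26 (64 addresses), so it can
	// assign from the block without coordinating with other hosts.
	block := netip.MustParsePrefix("192.168.104.192/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.104.192"): true, // assumed already claimed
		netip.MustParseAddr("192.168.104.193"): true, // assumed already claimed
	}
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			fmt.Println("next free address:", a) // 192.168.104.194
			return
		}
	}
	fmt.Println("block exhausted")
}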
Jan 24 00:46:28.478098 containerd[1468]: 2026-01-24 00:46:28.449 [INFO][4102] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.194/26] IPv6=[] ContainerID="82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" HandleID="k8s-pod-network.82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:28.479578 containerd[1468]: 2026-01-24 00:46:28.452 [INFO][4090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" Namespace="calico-system" Pod="csi-node-driver-k95gq" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb162143-7b21-40da-95af-2a95960643a6", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"", Pod:"csi-node-driver-k95gq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia1bf7bfa6c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:28.479578 containerd[1468]: 2026-01-24 00:46:28.452 [INFO][4090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.194/32] ContainerID="82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" Namespace="calico-system" Pod="csi-node-driver-k95gq" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:28.479578 containerd[1468]: 2026-01-24 00:46:28.452 [INFO][4090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1bf7bfa6c3 ContainerID="82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" Namespace="calico-system" Pod="csi-node-driver-k95gq" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:28.479578 containerd[1468]: 2026-01-24 00:46:28.455 [INFO][4090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" Namespace="calico-system" Pod="csi-node-driver-k95gq" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:28.479578 containerd[1468]: 2026-01-24 00:46:28.455 [INFO][4090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" Namespace="calico-system" Pod="csi-node-driver-k95gq" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb162143-7b21-40da-95af-2a95960643a6", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1", Pod:"csi-node-driver-k95gq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia1bf7bfa6c3", MAC:"5e:0c:22:a2:ba:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:28.479578 containerd[1468]: 2026-01-24 00:46:28.474 [INFO][4090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1" Namespace="calico-system" Pod="csi-node-driver-k95gq" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:28.512925 containerd[1468]: time="2026-01-24T00:46:28.512434425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:46:28.512925 containerd[1468]: time="2026-01-24T00:46:28.512572120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:46:28.512925 containerd[1468]: time="2026-01-24T00:46:28.512604922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:28.512925 containerd[1468]: time="2026-01-24T00:46:28.512726609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:28.550416 systemd[1]: Started cri-containerd-82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1.scope - libcontainer container 82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1. Jan 24 00:46:28.584381 containerd[1468]: time="2026-01-24T00:46:28.584332836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k95gq,Uid:eb162143-7b21-40da-95af-2a95960643a6,Namespace:calico-system,Attempt:1,} returns sandbox id \"82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1\"" Jan 24 00:46:28.587661 containerd[1468]: time="2026-01-24T00:46:28.587620135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:46:28.752136 containerd[1468]: time="2026-01-24T00:46:28.751956282Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:28.753690 containerd[1468]: time="2026-01-24T00:46:28.753475312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:46:28.753690 containerd[1468]: time="2026-01-24T00:46:28.753598429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:46:28.753931 kubelet[2637]: E0124 00:46:28.753827 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:46:28.753931 kubelet[2637]: E0124 00:46:28.753885 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:46:28.754602 kubelet[2637]: E0124 00:46:28.754085 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-k95gq_calico-system(eb162143-7b21-40da-95af-2a95960643a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:28.757478 containerd[1468]: time="2026-01-24T00:46:28.757203930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:46:28.919649 containerd[1468]: time="2026-01-24T00:46:28.919577809Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:28.921002 containerd[1468]: time="2026-01-24T00:46:28.920945821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:46:28.921276 containerd[1468]: time="2026-01-24T00:46:28.920982477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:46:28.923272 kubelet[2637]: E0124 00:46:28.921303 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:46:28.923272 kubelet[2637]: E0124 00:46:28.921368 2637 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:46:28.923272 kubelet[2637]: E0124 00:46:28.921563 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-k95gq_calico-system(eb162143-7b21-40da-95af-2a95960643a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:28.923272 kubelet[2637]: E0124 00:46:28.922747 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6" Jan 24 00:46:29.087495 containerd[1468]: time="2026-01-24T00:46:29.087340813Z" level=info msg="StopPodSandbox for \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\"" Jan 24 00:46:29.088964 containerd[1468]: time="2026-01-24T00:46:29.087682473Z" level=info msg="StopPodSandbox for \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\"" Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.179 [INFO][4182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.179 [INFO][4182] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" iface="eth0" netns="/var/run/netns/cni-f7868d9a-9503-8b15-6d9e-01efe6c5ce9e" Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.180 [INFO][4182] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" iface="eth0" netns="/var/run/netns/cni-f7868d9a-9503-8b15-6d9e-01efe6c5ce9e" Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.181 [INFO][4182] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" iface="eth0" netns="/var/run/netns/cni-f7868d9a-9503-8b15-6d9e-01efe6c5ce9e" Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.181 [INFO][4182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.181 [INFO][4182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.256 [INFO][4194] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" HandleID="k8s-pod-network.f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.256 [INFO][4194] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.256 [INFO][4194] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.268 [WARNING][4194] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" HandleID="k8s-pod-network.f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.269 [INFO][4194] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" HandleID="k8s-pod-network.f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.273 [INFO][4194] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:29.279852 containerd[1468]: 2026-01-24 00:46:29.276 [INFO][4182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:29.279852 containerd[1468]: time="2026-01-24T00:46:29.279800355Z" level=info msg="TearDown network for sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\" successfully" Jan 24 00:46:29.285424 containerd[1468]: time="2026-01-24T00:46:29.283634618Z" level=info msg="StopPodSandbox for \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\" returns successfully" Jan 24 00:46:29.288561 containerd[1468]: time="2026-01-24T00:46:29.288499047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8ln7b,Uid:bc52092c-b025-48a8-bd58-59cac1d3f427,Namespace:kube-system,Attempt:1,}" Jan 24 00:46:29.291944 systemd[1]: run-netns-cni\x2df7868d9a\x2d9503\x2d8b15\x2d6d9e\x2d01efe6c5ce9e.mount: Deactivated successfully. Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.173 [INFO][4177] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.175 [INFO][4177] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" iface="eth0" netns="/var/run/netns/cni-bb753a37-ec1c-829e-3899-d6b556ebe987" Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.176 [INFO][4177] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" iface="eth0" netns="/var/run/netns/cni-bb753a37-ec1c-829e-3899-d6b556ebe987" Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.177 [INFO][4177] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" iface="eth0" netns="/var/run/netns/cni-bb753a37-ec1c-829e-3899-d6b556ebe987" Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.177 [INFO][4177] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.178 [INFO][4177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.255 [INFO][4192] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" HandleID="k8s-pod-network.156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.258 [INFO][4192] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.271 [INFO][4192] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.299 [WARNING][4192] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" HandleID="k8s-pod-network.156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.299 [INFO][4192] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" HandleID="k8s-pod-network.156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.302 [INFO][4192] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:29.334272 containerd[1468]: 2026-01-24 00:46:29.306 [INFO][4177] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:29.343288 containerd[1468]: time="2026-01-24T00:46:29.337265460Z" level=info msg="TearDown network for sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\" successfully" Jan 24 00:46:29.343288 containerd[1468]: time="2026-01-24T00:46:29.337310301Z" level=info msg="StopPodSandbox for \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\" returns successfully" Jan 24 00:46:29.343288 containerd[1468]: time="2026-01-24T00:46:29.338395091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6b4f6b7-w9z5c,Uid:35a4077d-452d-4ef7-8393-2463352fe219,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:46:29.340762 systemd[1]: run-netns-cni\x2dbb753a37\x2dec1c\x2d829e\x2d3899\x2dd6b556ebe987.mount: Deactivated successfully. 
Jan 24 00:46:29.433929 kubelet[2637]: E0124 00:46:29.433807 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6" Jan 24 00:46:29.646679 systemd-networkd[1383]: calicbceb4083d3: Link UP Jan 24 00:46:29.650199 systemd-networkd[1383]: calicbceb4083d3: Gained carrier Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.449 [INFO][4212] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.470 [INFO][4212] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0 coredns-668d6bf9bc- kube-system bc52092c-b025-48a8-bd58-59cac1d3f427 958 0 2026-01-24 00:45:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58 coredns-668d6bf9bc-8ln7b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicbceb4083d3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ln7b" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.471 [INFO][4212] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ln7b" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.559 [INFO][4244] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" HandleID="k8s-pod-network.714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.560 [INFO][4244] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" HandleID="k8s-pod-network.714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" 
Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003824e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", "pod":"coredns-668d6bf9bc-8ln7b", "timestamp":"2026-01-24 00:46:29.559599868 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.560 [INFO][4244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.560 [INFO][4244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.561 [INFO][4244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.577 [INFO][4244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.583 [INFO][4244] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.593 [INFO][4244] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.596 [INFO][4244] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.600 [INFO][4244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.600 [INFO][4244] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.603 [INFO][4244] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.612 [INFO][4244] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.621 [INFO][4244] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.195/26] block=192.168.104.192/26 handle="k8s-pod-network.714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.621 [INFO][4244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.195/26] 
handle="k8s-pod-network.714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.621 [INFO][4244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:29.679644 containerd[1468]: 2026-01-24 00:46:29.621 [INFO][4244] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.195/26] IPv6=[] ContainerID="714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" HandleID="k8s-pod-network.714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:29.681023 containerd[1468]: 2026-01-24 00:46:29.636 [INFO][4212] cni-plugin/k8s.go 418: Populated endpoint ContainerID="714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ln7b" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bc52092c-b025-48a8-bd58-59cac1d3f427", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"", Pod:"coredns-668d6bf9bc-8ln7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbceb4083d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:29.681023 containerd[1468]: 2026-01-24 00:46:29.637 [INFO][4212] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.195/32] ContainerID="714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ln7b" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:29.681023 containerd[1468]: 2026-01-24 00:46:29.639 [INFO][4212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbceb4083d3 
ContainerID="714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ln7b" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:29.681023 containerd[1468]: 2026-01-24 00:46:29.648 [INFO][4212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ln7b" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:29.681023 containerd[1468]: 2026-01-24 00:46:29.650 [INFO][4212] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ln7b" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bc52092c-b025-48a8-bd58-59cac1d3f427", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c", Pod:"coredns-668d6bf9bc-8ln7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbceb4083d3", MAC:"aa:d2:88:6b:fa:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:29.681023 containerd[1468]: 2026-01-24 00:46:29.672 [INFO][4212] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ln7b" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:29.719399 containerd[1468]: time="2026-01-24T00:46:29.719057423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:46:29.719399 containerd[1468]: time="2026-01-24T00:46:29.719149203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:46:29.719399 containerd[1468]: time="2026-01-24T00:46:29.719177049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:29.721587 containerd[1468]: time="2026-01-24T00:46:29.720936512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:29.763095 systemd[1]: Started cri-containerd-714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c.scope - libcontainer container 714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c. Jan 24 00:46:29.773371 systemd-networkd[1383]: cali543549507b8: Link UP Jan 24 00:46:29.773734 systemd-networkd[1383]: cali543549507b8: Gained carrier Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.506 [INFO][4226] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.537 [INFO][4226] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0 calico-apiserver-7b6b4f6b7- calico-apiserver 35a4077d-452d-4ef7-8393-2463352fe219 957 0 2026-01-24 00:46:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b6b4f6b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58 calico-apiserver-7b6b4f6b7-w9z5c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali543549507b8 [] [] }} ContainerID="877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-w9z5c" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.537 [INFO][4226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-w9z5c" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.620 [INFO][4255] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" HandleID="k8s-pod-network.877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.621 [INFO][4255] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" HandleID="k8s-pod-network.877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" 
Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000381860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", "pod":"calico-apiserver-7b6b4f6b7-w9z5c", "timestamp":"2026-01-24 00:46:29.620421342 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.621 [INFO][4255] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.622 [INFO][4255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.622 [INFO][4255] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.678 [INFO][4255] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.692 [INFO][4255] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.707 [INFO][4255] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.710 [INFO][4255] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.717 [INFO][4255] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.718 [INFO][4255] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.720 [INFO][4255] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1 Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.731 [INFO][4255] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.750 [INFO][4255] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.196/26] block=192.168.104.192/26 handle="k8s-pod-network.877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.750 [INFO][4255] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.104.196/26] handle="k8s-pod-network.877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.750 [INFO][4255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:29.803493 containerd[1468]: 2026-01-24 00:46:29.750 [INFO][4255] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.196/26] IPv6=[] ContainerID="877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" HandleID="k8s-pod-network.877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:29.805092 containerd[1468]: 2026-01-24 00:46:29.762 [INFO][4226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-w9z5c" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0", GenerateName:"calico-apiserver-7b6b4f6b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"35a4077d-452d-4ef7-8393-2463352fe219", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6b4f6b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"", Pod:"calico-apiserver-7b6b4f6b7-w9z5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali543549507b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:29.805092 containerd[1468]: 2026-01-24 00:46:29.762 [INFO][4226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.196/32] ContainerID="877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-w9z5c" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:29.805092 containerd[1468]: 2026-01-24 00:46:29.762 [INFO][4226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali543549507b8 ContainerID="877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-w9z5c" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:29.805092 containerd[1468]: 2026-01-24 00:46:29.769 [INFO][4226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-w9z5c" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:29.805092 containerd[1468]: 2026-01-24 00:46:29.769 [INFO][4226] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-w9z5c" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0", GenerateName:"calico-apiserver-7b6b4f6b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"35a4077d-452d-4ef7-8393-2463352fe219", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6b4f6b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1", Pod:"calico-apiserver-7b6b4f6b7-w9z5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali543549507b8", MAC:"ba:00:b3:d9:f0:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:29.805092 containerd[1468]: 2026-01-24 00:46:29.799 [INFO][4226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-w9z5c" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:29.840389 containerd[1468]: time="2026-01-24T00:46:29.840173286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:46:29.840389 containerd[1468]: time="2026-01-24T00:46:29.840305391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:46:29.840389 containerd[1468]: time="2026-01-24T00:46:29.840325525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:29.840706 containerd[1468]: time="2026-01-24T00:46:29.840449416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:29.870682 systemd[1]: Started cri-containerd-877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1.scope - libcontainer container 877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1. Jan 24 00:46:29.897432 containerd[1468]: time="2026-01-24T00:46:29.897044464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8ln7b,Uid:bc52092c-b025-48a8-bd58-59cac1d3f427,Namespace:kube-system,Attempt:1,} returns sandbox id \"714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c\"" Jan 24 00:46:29.902944 containerd[1468]: time="2026-01-24T00:46:29.902717628Z" level=info msg="CreateContainer within sandbox \"714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:46:29.923921 containerd[1468]: time="2026-01-24T00:46:29.923860164Z" level=info msg="CreateContainer within sandbox \"714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b5ac3d81b0202dce3c7cb57ec0e693b0da75dfde6551e55de34870c6c29d26e\"" Jan 24 00:46:29.925858 containerd[1468]: time="2026-01-24T00:46:29.925813578Z" level=info msg="StartContainer for \"2b5ac3d81b0202dce3c7cb57ec0e693b0da75dfde6551e55de34870c6c29d26e\"" Jan 24 00:46:29.983241 systemd[1]: Started cri-containerd-2b5ac3d81b0202dce3c7cb57ec0e693b0da75dfde6551e55de34870c6c29d26e.scope - libcontainer container 2b5ac3d81b0202dce3c7cb57ec0e693b0da75dfde6551e55de34870c6c29d26e. 
Jan 24 00:46:29.996535 containerd[1468]: time="2026-01-24T00:46:29.996476538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6b4f6b7-w9z5c,Uid:35a4077d-452d-4ef7-8393-2463352fe219,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1\"" Jan 24 00:46:30.001579 containerd[1468]: time="2026-01-24T00:46:30.001487016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:46:30.036223 containerd[1468]: time="2026-01-24T00:46:30.036049962Z" level=info msg="StartContainer for \"2b5ac3d81b0202dce3c7cb57ec0e693b0da75dfde6551e55de34870c6c29d26e\" returns successfully" Jan 24 00:46:30.164039 containerd[1468]: time="2026-01-24T00:46:30.163855355Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:30.165464 containerd[1468]: time="2026-01-24T00:46:30.165291242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:46:30.165464 containerd[1468]: time="2026-01-24T00:46:30.165344715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:46:30.165645 kubelet[2637]: E0124 00:46:30.165592 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:30.166123 kubelet[2637]: E0124 00:46:30.165656 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:30.166123 kubelet[2637]: E0124 00:46:30.165854 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlrsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b6b4f6b7-w9z5c_calico-apiserver(35a4077d-452d-4ef7-8393-2463352fe219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:30.167463 kubelet[2637]: E0124 00:46:30.167376 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c" podUID="35a4077d-452d-4ef7-8393-2463352fe219" Jan 24 00:46:30.433642 kubelet[2637]: E0124 00:46:30.432440 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c" podUID="35a4077d-452d-4ef7-8393-2463352fe219" Jan 24 00:46:30.442203 kubelet[2637]: E0124 00:46:30.442028 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6" Jan 24 00:46:30.447038 systemd-networkd[1383]: calia1bf7bfa6c3: 
Gained IPv6LL Jan 24 00:46:30.495456 kubelet[2637]: I0124 00:46:30.495362 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8ln7b" podStartSLOduration=41.495321725 podStartE2EDuration="41.495321725s" podCreationTimestamp="2026-01-24 00:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:46:30.494960349 +0000 UTC m=+48.597524193" watchObservedRunningTime="2026-01-24 00:46:30.495321725 +0000 UTC m=+48.597885559" Jan 24 00:46:31.022414 systemd-networkd[1383]: cali543549507b8: Gained IPv6LL Jan 24 00:46:31.088107 containerd[1468]: time="2026-01-24T00:46:31.087529468Z" level=info msg="StopPodSandbox for \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\"" Jan 24 00:46:31.089388 containerd[1468]: time="2026-01-24T00:46:31.088721066Z" level=info msg="StopPodSandbox for \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\"" Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.190 [INFO][4440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.191 [INFO][4440] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" iface="eth0" netns="/var/run/netns/cni-9f582a74-93cc-d2ae-9474-50006afbd73d" Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.191 [INFO][4440] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" iface="eth0" netns="/var/run/netns/cni-9f582a74-93cc-d2ae-9474-50006afbd73d" Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.193 [INFO][4440] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" iface="eth0" netns="/var/run/netns/cni-9f582a74-93cc-d2ae-9474-50006afbd73d" Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.193 [INFO][4440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.193 [INFO][4440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.275 [INFO][4454] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" HandleID="k8s-pod-network.97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.276 [INFO][4454] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.276 [INFO][4454] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.287 [WARNING][4454] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" HandleID="k8s-pod-network.97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.287 [INFO][4454] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" HandleID="k8s-pod-network.97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.289 [INFO][4454] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:31.295433 containerd[1468]: 2026-01-24 00:46:31.293 [INFO][4440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:31.298469 containerd[1468]: time="2026-01-24T00:46:31.298284293Z" level=info msg="TearDown network for sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\" successfully" Jan 24 00:46:31.298469 containerd[1468]: time="2026-01-24T00:46:31.298331698Z" level=info msg="StopPodSandbox for \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\" returns successfully" Jan 24 00:46:31.302245 containerd[1468]: time="2026-01-24T00:46:31.300003112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pshxz,Uid:10921630-8357-43e3-be35-29668acdc0c4,Namespace:calico-system,Attempt:1,}" Jan 24 00:46:31.305080 systemd[1]: run-netns-cni\x2d9f582a74\x2d93cc\x2dd2ae\x2d9474\x2d50006afbd73d.mount: Deactivated successfully. Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.197 [INFO][4441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.198 [INFO][4441] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" iface="eth0" netns="/var/run/netns/cni-5d93f559-c4a5-a2d7-641b-a7abf976c2f4" Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.198 [INFO][4441] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" iface="eth0" netns="/var/run/netns/cni-5d93f559-c4a5-a2d7-641b-a7abf976c2f4" Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.199 [INFO][4441] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" iface="eth0" netns="/var/run/netns/cni-5d93f559-c4a5-a2d7-641b-a7abf976c2f4" Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.200 [INFO][4441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.201 [INFO][4441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.280 [INFO][4459] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" HandleID="k8s-pod-network.fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.280 [INFO][4459] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.289 [INFO][4459] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.310 [WARNING][4459] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" HandleID="k8s-pod-network.fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.310 [INFO][4459] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" HandleID="k8s-pod-network.fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.314 [INFO][4459] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:31.324667 containerd[1468]: 2026-01-24 00:46:31.318 [INFO][4441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:31.330484 containerd[1468]: time="2026-01-24T00:46:31.324852891Z" level=info msg="TearDown network for sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\" successfully" Jan 24 00:46:31.330484 containerd[1468]: time="2026-01-24T00:46:31.324885887Z" level=info msg="StopPodSandbox for \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\" returns successfully" Jan 24 00:46:31.330484 containerd[1468]: time="2026-01-24T00:46:31.328404432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8cb4b6c4-b426z,Uid:8d0e50ad-79cd-460b-a113-c524281d7733,Namespace:calico-system,Attempt:1,}" Jan 24 00:46:31.348623 systemd[1]: run-netns-cni\x2d5d93f559\x2dc4a5\x2da2d7\x2d641b\x2da7abf976c2f4.mount: Deactivated successfully. 
Jan 24 00:46:31.407380 systemd-networkd[1383]: calicbceb4083d3: Gained IPv6LL Jan 24 00:46:31.455925 kubelet[2637]: E0124 00:46:31.455146 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c" podUID="35a4077d-452d-4ef7-8393-2463352fe219" Jan 24 00:46:31.640243 systemd-networkd[1383]: cali153317b0240: Link UP Jan 24 00:46:31.647487 systemd-networkd[1383]: cali153317b0240: Gained carrier Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.419 [INFO][4469] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.450 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0 goldmane-666569f655- calico-system 10921630-8357-43e3-be35-29668acdc0c4 998 0 2026-01-24 00:46:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58 goldmane-666569f655-pshxz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali153317b0240 [] [] }} ContainerID="38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" Namespace="calico-system" Pod="goldmane-666569f655-pshxz" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.451 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" Namespace="calico-system" Pod="goldmane-666569f655-pshxz" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.540 [INFO][4494] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" HandleID="k8s-pod-network.38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.542 [INFO][4494] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" HandleID="k8s-pod-network.38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", "pod":"goldmane-666569f655-pshxz", "timestamp":"2026-01-24 00:46:31.5408714 +0000 
UTC"}, Hostname:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.542 [INFO][4494] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.542 [INFO][4494] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.542 [INFO][4494] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.556 [INFO][4494] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.564 [INFO][4494] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.574 [INFO][4494] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.579 [INFO][4494] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.586 [INFO][4494] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.586 [INFO][4494] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.589 [INFO][4494] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71 Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.597 [INFO][4494] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.610 [INFO][4494] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.197/26] block=192.168.104.192/26 handle="k8s-pod-network.38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.610 [INFO][4494] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.197/26] handle="k8s-pod-network.38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.610 [INFO][4494] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:46:31.679912 containerd[1468]: 2026-01-24 00:46:31.610 [INFO][4494] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.197/26] IPv6=[] ContainerID="38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" HandleID="k8s-pod-network.38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:31.681163 containerd[1468]: 2026-01-24 00:46:31.622 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" Namespace="calico-system" Pod="goldmane-666569f655-pshxz" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"10921630-8357-43e3-be35-29668acdc0c4", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"", Pod:"goldmane-666569f655-pshxz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali153317b0240", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:31.681163 containerd[1468]: 2026-01-24 00:46:31.622 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.197/32] ContainerID="38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" Namespace="calico-system" Pod="goldmane-666569f655-pshxz" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:31.681163 containerd[1468]: 2026-01-24 00:46:31.622 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali153317b0240 ContainerID="38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" Namespace="calico-system" Pod="goldmane-666569f655-pshxz" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:31.681163 containerd[1468]: 2026-01-24 00:46:31.651 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" Namespace="calico-system" Pod="goldmane-666569f655-pshxz" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:31.681163 
containerd[1468]: 2026-01-24 00:46:31.652 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" Namespace="calico-system" Pod="goldmane-666569f655-pshxz" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"10921630-8357-43e3-be35-29668acdc0c4", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71", Pod:"goldmane-666569f655-pshxz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali153317b0240", MAC:"ae:fa:31:2b:46:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:31.681163 containerd[1468]: 2026-01-24 00:46:31.676 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71" Namespace="calico-system" Pod="goldmane-666569f655-pshxz" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:31.745177 systemd-networkd[1383]: cali08e88f6d844: Link UP Jan 24 00:46:31.748727 systemd-networkd[1383]: cali08e88f6d844: Gained carrier Jan 24 00:46:31.769723 containerd[1468]: time="2026-01-24T00:46:31.769397796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:46:31.769723 containerd[1468]: time="2026-01-24T00:46:31.769470516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:46:31.769723 containerd[1468]: time="2026-01-24T00:46:31.769535326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:31.771066 containerd[1468]: time="2026-01-24T00:46:31.769762539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.494 [INFO][4480] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.528 [INFO][4480] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0 calico-kube-controllers-d8cb4b6c4- calico-system 8d0e50ad-79cd-460b-a113-c524281d7733 999 0 2026-01-24 00:46:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d8cb4b6c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58 calico-kube-controllers-d8cb4b6c4-b426z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali08e88f6d844 [] [] }} ContainerID="4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" Namespace="calico-system" Pod="calico-kube-controllers-d8cb4b6c4-b426z" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.530 [INFO][4480] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" Namespace="calico-system" Pod="calico-kube-controllers-d8cb4b6c4-b426z" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.608 [INFO][4508] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" HandleID="k8s-pod-network.4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.608 [INFO][4508] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" HandleID="k8s-pod-network.4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5960), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", "pod":"calico-kube-controllers-d8cb4b6c4-b426z", "timestamp":"2026-01-24 00:46:31.608658947 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.608 [INFO][4508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.610 [INFO][4508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.610 [INFO][4508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.657 [INFO][4508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.678 [INFO][4508] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.687 [INFO][4508] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.691 [INFO][4508] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.696 [INFO][4508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.696 [INFO][4508] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.699 [INFO][4508] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.706 [INFO][4508] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.720 [INFO][4508] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.198/26] block=192.168.104.192/26 handle="k8s-pod-network.4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.720 [INFO][4508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.198/26] handle="k8s-pod-network.4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.721 [INFO][4508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:46:31.782110 containerd[1468]: 2026-01-24 00:46:31.721 [INFO][4508] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.198/26] IPv6=[] ContainerID="4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" HandleID="k8s-pod-network.4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:31.786080 containerd[1468]: 2026-01-24 00:46:31.730 [INFO][4480] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" Namespace="calico-system" Pod="calico-kube-controllers-d8cb4b6c4-b426z" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0", GenerateName:"calico-kube-controllers-d8cb4b6c4-", Namespace:"calico-system", SelfLink:"", UID:"8d0e50ad-79cd-460b-a113-c524281d7733", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8cb4b6c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"", Pod:"calico-kube-controllers-d8cb4b6c4-b426z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08e88f6d844", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:31.786080 containerd[1468]: 2026-01-24 00:46:31.733 [INFO][4480] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.198/32] ContainerID="4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" Namespace="calico-system" Pod="calico-kube-controllers-d8cb4b6c4-b426z" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:31.786080 containerd[1468]: 2026-01-24 00:46:31.733 [INFO][4480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08e88f6d844 ContainerID="4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" Namespace="calico-system" Pod="calico-kube-controllers-d8cb4b6c4-b426z" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:31.786080 containerd[1468]: 2026-01-24 00:46:31.751 [INFO][4480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" Namespace="calico-system" Pod="calico-kube-controllers-d8cb4b6c4-b426z" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:31.786080 containerd[1468]: 2026-01-24 00:46:31.755 [INFO][4480] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" Namespace="calico-system" Pod="calico-kube-controllers-d8cb4b6c4-b426z" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0", GenerateName:"calico-kube-controllers-d8cb4b6c4-", Namespace:"calico-system", SelfLink:"", UID:"8d0e50ad-79cd-460b-a113-c524281d7733", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8cb4b6c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c", Pod:"calico-kube-controllers-d8cb4b6c4-b426z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08e88f6d844", MAC:"1a:f0:d1:a2:b6:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:31.786080 containerd[1468]: 2026-01-24 00:46:31.775 [INFO][4480] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c" Namespace="calico-system" Pod="calico-kube-controllers-d8cb4b6c4-b426z" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:31.828433 systemd[1]: Started cri-containerd-38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71.scope - libcontainer container 38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71. Jan 24 00:46:31.871949 containerd[1468]: time="2026-01-24T00:46:31.870139933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:46:31.871949 containerd[1468]: time="2026-01-24T00:46:31.871580834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:46:31.871949 containerd[1468]: time="2026-01-24T00:46:31.871610699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:31.871949 containerd[1468]: time="2026-01-24T00:46:31.871734965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:31.904453 systemd[1]: Started cri-containerd-4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c.scope - libcontainer container 4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c. Jan 24 00:46:31.987858 containerd[1468]: time="2026-01-24T00:46:31.987790852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pshxz,Uid:10921630-8357-43e3-be35-29668acdc0c4,Namespace:calico-system,Attempt:1,} returns sandbox id \"38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71\"" Jan 24 00:46:31.993210 containerd[1468]: time="2026-01-24T00:46:31.992601136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:46:32.037229 kubelet[2637]: I0124 00:46:32.036437 2637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:46:32.046836 containerd[1468]: time="2026-01-24T00:46:32.046593114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8cb4b6c4-b426z,Uid:8d0e50ad-79cd-460b-a113-c524281d7733,Namespace:calico-system,Attempt:1,} returns sandbox id \"4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c\"" Jan 24 00:46:32.097680 containerd[1468]: time="2026-01-24T00:46:32.097113432Z" level=info msg="StopPodSandbox for \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\"" Jan 24 00:46:32.098240 containerd[1468]: time="2026-01-24T00:46:32.097778327Z" level=info msg="StopPodSandbox for \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\"" Jan 24 00:46:32.165257 containerd[1468]: time="2026-01-24T00:46:32.162883678Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:32.171982 containerd[1468]: time="2026-01-24T00:46:32.170999511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:46:32.171982 containerd[1468]: time="2026-01-24T00:46:32.171103930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:46:32.173423 kubelet[2637]: E0124 00:46:32.171340 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:46:32.173423 kubelet[2637]: E0124 00:46:32.171391 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:46:32.173423 kubelet[2637]: E0124 00:46:32.171680 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8wl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pshxz_calico-system(10921630-8357-43e3-be35-29668acdc0c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:32.173423 kubelet[2637]: E0124 00:46:32.173067 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pshxz" podUID="10921630-8357-43e3-be35-29668acdc0c4" Jan 24 00:46:32.174239 containerd[1468]: time="2026-01-24T00:46:32.172490392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:46:32.357390 containerd[1468]: time="2026-01-24T00:46:32.357093325Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:32.366236 containerd[1468]: time="2026-01-24T00:46:32.366152494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:46:32.367757 containerd[1468]: time="2026-01-24T00:46:32.366512047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:46:32.374656 kubelet[2637]: E0124 00:46:32.373126 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:46:32.374656 kubelet[2637]: E0124 00:46:32.374171 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:46:32.374656 kubelet[2637]: E0124 00:46:32.374379 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc42n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-d8cb4b6c4-b426z_calico-system(8d0e50ad-79cd-460b-a113-c524281d7733): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:32.375637 kubelet[2637]: E0124 00:46:32.375585 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z" podUID="8d0e50ad-79cd-460b-a113-c524281d7733" Jan 24 00:46:32.462737 kubelet[2637]: E0124 00:46:32.462311 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z" podUID="8d0e50ad-79cd-460b-a113-c524281d7733" Jan 24 00:46:32.474741 kubelet[2637]: E0124 00:46:32.473806 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pshxz" podUID="10921630-8357-43e3-be35-29668acdc0c4" Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.241 [INFO][4640] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.244 [INFO][4640] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" iface="eth0" netns="/var/run/netns/cni-c11b26ac-9ec4-e61b-4a74-6a41cd1b0912" Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.244 [INFO][4640] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" iface="eth0" netns="/var/run/netns/cni-c11b26ac-9ec4-e61b-4a74-6a41cd1b0912" Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.244 [INFO][4640] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" iface="eth0" netns="/var/run/netns/cni-c11b26ac-9ec4-e61b-4a74-6a41cd1b0912" Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.244 [INFO][4640] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.245 [INFO][4640] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.428 [INFO][4657] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" HandleID="k8s-pod-network.46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.436 [INFO][4657] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.436 [INFO][4657] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.467 [WARNING][4657] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" HandleID="k8s-pod-network.46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.468 [INFO][4657] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" HandleID="k8s-pod-network.46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.475 [INFO][4657] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:32.486296 containerd[1468]: 2026-01-24 00:46:32.481 [INFO][4640] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:32.491571 containerd[1468]: time="2026-01-24T00:46:32.489282932Z" level=info msg="TearDown network for sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\" successfully" Jan 24 00:46:32.491571 containerd[1468]: time="2026-01-24T00:46:32.489325555Z" level=info msg="StopPodSandbox for \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\" returns successfully" Jan 24 00:46:32.493233 containerd[1468]: time="2026-01-24T00:46:32.492814421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6b4f6b7-r8txm,Uid:c8ef6693-42b2-4015-8f31-4aeadd5a6288,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:46:32.494958 systemd[1]: run-netns-cni\x2dc11b26ac\x2d9ec4\x2de61b\x2d4a74\x2d6a41cd1b0912.mount: Deactivated successfully. Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.265 [INFO][4641] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.266 [INFO][4641] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" iface="eth0" netns="/var/run/netns/cni-46db19ce-f831-827e-ad6d-b2887fc80d6b" Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.266 [INFO][4641] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" iface="eth0" netns="/var/run/netns/cni-46db19ce-f831-827e-ad6d-b2887fc80d6b" Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.267 [INFO][4641] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" iface="eth0" netns="/var/run/netns/cni-46db19ce-f831-827e-ad6d-b2887fc80d6b" Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.267 [INFO][4641] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.267 [INFO][4641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.460 [INFO][4663] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" HandleID="k8s-pod-network.773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.467 [INFO][4663] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.476 [INFO][4663] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.503 [WARNING][4663] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" HandleID="k8s-pod-network.773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.504 [INFO][4663] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" HandleID="k8s-pod-network.773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.507 [INFO][4663] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:32.517374 containerd[1468]: 2026-01-24 00:46:32.512 [INFO][4641] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:32.524218 containerd[1468]: time="2026-01-24T00:46:32.519654124Z" level=info msg="TearDown network for sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\" successfully" Jan 24 00:46:32.524218 containerd[1468]: time="2026-01-24T00:46:32.519723179Z" level=info msg="StopPodSandbox for \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\" returns successfully" Jan 24 00:46:32.524936 containerd[1468]: time="2026-01-24T00:46:32.524446773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trwx8,Uid:9897b791-32b8-489d-b2bb-407f3c85a8e0,Namespace:kube-system,Attempt:1,}" Jan 24 00:46:32.544314 systemd[1]: run-netns-cni\x2d46db19ce\x2df831\x2d827e\x2dad6d\x2db2887fc80d6b.mount: Deactivated successfully. 
Jan 24 00:46:32.850363 systemd-networkd[1383]: cali52feb2dc912: Link UP Jan 24 00:46:32.852933 systemd-networkd[1383]: cali52feb2dc912: Gained carrier Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.653 [INFO][4691] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.684 [INFO][4691] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0 coredns-668d6bf9bc- kube-system 9897b791-32b8-489d-b2bb-407f3c85a8e0 1028 0 2026-01-24 00:45:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58 coredns-668d6bf9bc-trwx8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali52feb2dc912 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" Namespace="kube-system" Pod="coredns-668d6bf9bc-trwx8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.685 [INFO][4691] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" Namespace="kube-system" Pod="coredns-668d6bf9bc-trwx8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.751 [INFO][4705] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" HandleID="k8s-pod-network.c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.752 [INFO][4705] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" HandleID="k8s-pod-network.c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5840), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", "pod":"coredns-668d6bf9bc-trwx8", "timestamp":"2026-01-24 00:46:32.751085735 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.752 [INFO][4705] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.753 [INFO][4705] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.753 [INFO][4705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.775 [INFO][4705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.791 [INFO][4705] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.800 [INFO][4705] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.804 [INFO][4705] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.808 [INFO][4705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.808 [INFO][4705] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.810 [INFO][4705] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.818 [INFO][4705] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.832 [INFO][4705] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.199/26] block=192.168.104.192/26 handle="k8s-pod-network.c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.832 [INFO][4705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.199/26] handle="k8s-pod-network.c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.833 [INFO][4705] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
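
Annotation: the IPAM exchange above is Calico's block-affinity allocation in miniature: the plugin takes a host-wide lock, confirms this host's affinity for block 192.168.104.192/26, claims the lowest free address (192.168.104.199 for the coredns pod), writes the block back, and releases the lock. The following stdlib-only sketch mimics that shape; it is not Calico's implementation, and the seven pre-claimed addresses are an assumption so the example lands on .199 the way the log does.

    package main

    import (
    	"errors"
    	"fmt"
    	"net/netip"
    	"sync"
    )

    // block models one IPAM affinity block, e.g. 192.168.104.192/26.
    type block struct {
    	mu     sync.Mutex // stands in for the host-wide IPAM lock in the log
    	prefix netip.Prefix
    	used   map[netip.Addr]bool
    }

    func newBlock(cidr string) *block {
    	return &block{prefix: netip.MustParsePrefix(cidr), used: map[netip.Addr]bool{}}
    }

    // assign claims the lowest free address, mirroring the logged
    // "Attempting to assign 1 addresses from block" step.
    func (b *block) assign() (netip.Addr, error) {
    	b.mu.Lock()         // "Acquired host-wide IPAM lock."
    	defer b.mu.Unlock() // "Released host-wide IPAM lock."
    	for a := b.prefix.Addr(); b.prefix.Contains(a); a = a.Next() {
    		if !b.used[a] {
    			b.used[a] = true
    			return a, nil
    		}
    	}
    	return netip.Addr{}, errors.New("block exhausted")
    }

    func main() {
    	b := newBlock("192.168.104.192/26")
    	// Pretend .192 through .198 were already claimed by earlier pods.
    	for i := 0; i < 7; i++ {
    		b.assign()
    	}
    	ip, _ := b.assign()
    	fmt.Println(ip) // 192.168.104.199, matching the coredns assignment above
    }
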
Jan 24 00:46:32.880986 containerd[1468]: 2026-01-24 00:46:32.833 [INFO][4705] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.199/26] IPv6=[] ContainerID="c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" HandleID="k8s-pod-network.c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:32.883649 containerd[1468]: 2026-01-24 00:46:32.837 [INFO][4691] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" Namespace="kube-system" Pod="coredns-668d6bf9bc-trwx8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9897b791-32b8-489d-b2bb-407f3c85a8e0", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"", Pod:"coredns-668d6bf9bc-trwx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52feb2dc912", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:32.883649 containerd[1468]: 2026-01-24 00:46:32.837 [INFO][4691] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.199/32] ContainerID="c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" Namespace="kube-system" Pod="coredns-668d6bf9bc-trwx8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:32.883649 containerd[1468]: 2026-01-24 00:46:32.838 [INFO][4691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52feb2dc912 ContainerID="c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" Namespace="kube-system" Pod="coredns-668d6bf9bc-trwx8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:32.883649 containerd[1468]: 2026-01-24 
00:46:32.853 [INFO][4691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" Namespace="kube-system" Pod="coredns-668d6bf9bc-trwx8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:32.883649 containerd[1468]: 2026-01-24 00:46:32.856 [INFO][4691] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" Namespace="kube-system" Pod="coredns-668d6bf9bc-trwx8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9897b791-32b8-489d-b2bb-407f3c85a8e0", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab", Pod:"coredns-668d6bf9bc-trwx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52feb2dc912", MAC:"a2:ec:2f:9e:d4:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:32.883649 containerd[1468]: 2026-01-24 00:46:32.878 [INFO][4691] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab" Namespace="kube-system" Pod="coredns-668d6bf9bc-trwx8" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:32.941326 containerd[1468]: time="2026-01-24T00:46:32.940257343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:46:32.941326 containerd[1468]: time="2026-01-24T00:46:32.940475338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:46:32.941326 containerd[1468]: time="2026-01-24T00:46:32.940629776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:32.941326 containerd[1468]: time="2026-01-24T00:46:32.940818615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:32.985025 systemd[1]: Started cri-containerd-c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab.scope - libcontainer container c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab. Jan 24 00:46:33.009059 systemd-networkd[1383]: calibced8d2b459: Link UP Jan 24 00:46:33.019613 systemd-networkd[1383]: calibced8d2b459: Gained carrier Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.669 [INFO][4679] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.701 [INFO][4679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0 calico-apiserver-7b6b4f6b7- calico-apiserver c8ef6693-42b2-4015-8f31-4aeadd5a6288 1027 0 2026-01-24 00:46:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b6b4f6b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58 calico-apiserver-7b6b4f6b7-r8txm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibced8d2b459 [] [] }} ContainerID="b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-r8txm" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.701 [INFO][4679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-r8txm" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.799 [INFO][4710] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" HandleID="k8s-pod-network.b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.799 [INFO][4710] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" HandleID="k8s-pod-network.b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000156a60), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", "pod":"calico-apiserver-7b6b4f6b7-r8txm", "timestamp":"2026-01-24 00:46:32.799168625 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.799 [INFO][4710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.833 [INFO][4710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.833 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58' Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.874 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.889 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.905 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.920 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.927 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.927 [INFO][4710] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.931 [INFO][4710] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.942 [INFO][4710] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.979 [INFO][4710] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.200/26] block=192.168.104.192/26 handle="k8s-pod-network.b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.980 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.200/26] handle="k8s-pod-network.b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" host="ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58" Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.980 [INFO][4710] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:33.059780 containerd[1468]: 2026-01-24 00:46:32.980 [INFO][4710] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.200/26] IPv6=[] ContainerID="b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" HandleID="k8s-pod-network.b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:33.063621 containerd[1468]: 2026-01-24 00:46:32.990 [INFO][4679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-r8txm" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0", GenerateName:"calico-apiserver-7b6b4f6b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8ef6693-42b2-4015-8f31-4aeadd5a6288", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6b4f6b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"", Pod:"calico-apiserver-7b6b4f6b7-r8txm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibced8d2b459", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:33.063621 containerd[1468]: 2026-01-24 00:46:32.991 [INFO][4679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.200/32] ContainerID="b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-r8txm" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:33.063621 containerd[1468]: 2026-01-24 00:46:32.991 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibced8d2b459 ContainerID="b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-r8txm" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:33.063621 containerd[1468]: 2026-01-24 00:46:33.019 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-r8txm" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:33.063621 containerd[1468]: 2026-01-24 00:46:33.023 [INFO][4679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-r8txm" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0", GenerateName:"calico-apiserver-7b6b4f6b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8ef6693-42b2-4015-8f31-4aeadd5a6288", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6b4f6b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f", Pod:"calico-apiserver-7b6b4f6b7-r8txm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibced8d2b459", MAC:"5e:99:6f:a3:ae:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:33.063621 containerd[1468]: 2026-01-24 00:46:33.052 [INFO][4679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f" Namespace="calico-apiserver" Pod="calico-apiserver-7b6b4f6b7-r8txm" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:33.070670 systemd-networkd[1383]: cali08e88f6d844: Gained IPv6LL Jan 24 00:46:33.138229 containerd[1468]: time="2026-01-24T00:46:33.137166918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:46:33.138229 containerd[1468]: time="2026-01-24T00:46:33.137264798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:46:33.138229 containerd[1468]: time="2026-01-24T00:46:33.137294998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:33.138229 containerd[1468]: time="2026-01-24T00:46:33.137416046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:33.162043 containerd[1468]: time="2026-01-24T00:46:33.161387334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trwx8,Uid:9897b791-32b8-489d-b2bb-407f3c85a8e0,Namespace:kube-system,Attempt:1,} returns sandbox id \"c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab\"" Jan 24 00:46:33.174639 containerd[1468]: time="2026-01-24T00:46:33.174585943Z" level=info msg="CreateContainer within sandbox \"c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:46:33.213843 containerd[1468]: time="2026-01-24T00:46:33.213671191Z" level=info msg="CreateContainer within sandbox \"c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c8080d2a8983246d0208381d906085be6f5656101a761779743a94a1a2b013a6\"" Jan 24 00:46:33.217212 containerd[1468]: time="2026-01-24T00:46:33.215084000Z" level=info msg="StartContainer for \"c8080d2a8983246d0208381d906085be6f5656101a761779743a94a1a2b013a6\"" Jan 24 00:46:33.215411 systemd[1]: Started cri-containerd-b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f.scope - libcontainer container b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f. Jan 24 00:46:33.278474 systemd[1]: Started cri-containerd-c8080d2a8983246d0208381d906085be6f5656101a761779743a94a1a2b013a6.scope - libcontainer container c8080d2a8983246d0208381d906085be6f5656101a761779743a94a1a2b013a6. 
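
Annotation: the "Back-off pulling image" entries scattered through this section come from the kubelet's capped exponential retry between image pull attempts, which is why the same ImagePullBackOff error keeps reappearing at a steady cadence once the cap is reached. A small sketch of such a schedule follows; the 10-second initial delay and 5-minute ceiling are the commonly cited kubelet defaults and should be read as assumptions here rather than values confirmed by this log.

    package main

    import (
    	"fmt"
    	"time"
    )

    // backoff returns a capped exponential delay for the given retry attempt.
    func backoff(initial, max time.Duration, attempt int) time.Duration {
    	d := initial
    	for i := 0; i < attempt; i++ {
    		d *= 2
    		if d > max {
    			return max
    		}
    	}
    	return d
    }

    func main() {
    	for attempt := 0; attempt < 7; attempt++ {
    		fmt.Printf("retry %d after %s\n", attempt, backoff(10*time.Second, 5*time.Minute, attempt))
    	}
    	// Output: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s; once the cap is
    	// hit, every sync just logs another ImagePullBackOff, as seen here.
    }
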
Jan 24 00:46:33.351707 containerd[1468]: time="2026-01-24T00:46:33.351636719Z" level=info msg="StartContainer for \"c8080d2a8983246d0208381d906085be6f5656101a761779743a94a1a2b013a6\" returns successfully" Jan 24 00:46:33.508429 kubelet[2637]: E0124 00:46:33.508288 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z" podUID="8d0e50ad-79cd-460b-a113-c524281d7733" Jan 24 00:46:33.512897 kubelet[2637]: E0124 00:46:33.512429 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pshxz" podUID="10921630-8357-43e3-be35-29668acdc0c4" Jan 24 00:46:33.514220 kernel: bpftool[4888]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:46:33.536064 containerd[1468]: time="2026-01-24T00:46:33.535389867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6b4f6b7-r8txm,Uid:c8ef6693-42b2-4015-8f31-4aeadd5a6288,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f\"" Jan 24 00:46:33.538564 containerd[1468]: time="2026-01-24T00:46:33.538527228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:46:33.582482 systemd-networkd[1383]: cali153317b0240: Gained IPv6LL Jan 24 00:46:33.680217 kubelet[2637]: I0124 00:46:33.679437 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-trwx8" podStartSLOduration=44.679408371 podStartE2EDuration="44.679408371s" podCreationTimestamp="2026-01-24 00:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:46:33.63768819 +0000 UTC m=+51.740252030" watchObservedRunningTime="2026-01-24 00:46:33.679408371 +0000 UTC m=+51.781972218" Jan 24 00:46:33.715715 containerd[1468]: time="2026-01-24T00:46:33.715144154Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:33.717383 containerd[1468]: time="2026-01-24T00:46:33.717330640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:46:33.717643 containerd[1468]: time="2026-01-24T00:46:33.717535211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:46:33.718200 kubelet[2637]: E0124 00:46:33.718035 
2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:33.718478 kubelet[2637]: E0124 00:46:33.718164 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:33.719247 kubelet[2637]: E0124 00:46:33.719114 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b6b4f6b7-r8txm_calico-apiserver(c8ef6693-42b2-4015-8f31-4aeadd5a6288): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:33.720428 kubelet[2637]: E0124 00:46:33.720344 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm" podUID="c8ef6693-42b2-4015-8f31-4aeadd5a6288" Jan 24 00:46:34.161594 systemd-networkd[1383]: vxlan.calico: Link UP Jan 24 00:46:34.161606 systemd-networkd[1383]: vxlan.calico: Gained carrier Jan 24 00:46:34.479473 systemd-networkd[1383]: calibced8d2b459: Gained IPv6LL Jan 24 00:46:34.514216 kubelet[2637]: E0124 00:46:34.513888 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm" podUID="c8ef6693-42b2-4015-8f31-4aeadd5a6288" Jan 24 00:46:34.862465 systemd-networkd[1383]: cali52feb2dc912: Gained IPv6LL Jan 24 00:46:35.515115 kubelet[2637]: E0124 00:46:35.514895 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm" podUID="c8ef6693-42b2-4015-8f31-4aeadd5a6288" Jan 24 00:46:35.950435 systemd-networkd[1383]: vxlan.calico: Gained IPv6LL Jan 24 00:46:38.882954 ntpd[1438]: Listen normally on 7 vxlan.calico 192.168.104.192:123 Jan 24 00:46:38.883171 ntpd[1438]: Listen normally on 8 califcadb4058fe [fe80::ecee:eeff:feee:eeee%4]:123 Jan 24 00:46:38.883765 ntpd[1438]: 24 Jan 00:46:38 ntpd[1438]: Listen normally on 7 vxlan.calico 192.168.104.192:123 Jan 24 00:46:38.883765 ntpd[1438]: 24 Jan 00:46:38 ntpd[1438]: Listen normally on 8 califcadb4058fe [fe80::ecee:eeff:feee:eeee%4]:123 Jan 24 00:46:38.883765 ntpd[1438]: 24 Jan 00:46:38 ntpd[1438]: Listen normally on 9 calia1bf7bfa6c3 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 24 00:46:38.883765 ntpd[1438]: 24 Jan 00:46:38 ntpd[1438]: Listen normally on 10 calicbceb4083d3 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 24 00:46:38.883765 ntpd[1438]: 24 Jan 00:46:38 ntpd[1438]: Listen normally on 11 cali543549507b8 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 24 00:46:38.883765 ntpd[1438]: 24 Jan 00:46:38 ntpd[1438]: Listen normally on 12 cali153317b0240 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 24 00:46:38.883765 ntpd[1438]: 24 Jan 00:46:38 ntpd[1438]: Listen normally on 13 cali08e88f6d844 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 24 00:46:38.883765 ntpd[1438]: 24 Jan 00:46:38 ntpd[1438]: Listen normally on 14 cali52feb2dc912 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 24 00:46:38.883765 ntpd[1438]: 24 Jan 00:46:38 ntpd[1438]: Listen normally on 15 calibced8d2b459 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 24 00:46:38.883765 ntpd[1438]: 24 Jan 00:46:38 ntpd[1438]: Listen normally on 16 vxlan.calico [fe80::64a6:f7ff:fe55:836d%12]:123 Jan 24 00:46:38.883304 ntpd[1438]: Listen normally on 9 calia1bf7bfa6c3 
[fe80::ecee:eeff:feee:eeee%5]:123 Jan 24 00:46:38.883373 ntpd[1438]: Listen normally on 10 calicbceb4083d3 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 24 00:46:38.883429 ntpd[1438]: Listen normally on 11 cali543549507b8 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 24 00:46:38.883486 ntpd[1438]: Listen normally on 12 cali153317b0240 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 24 00:46:38.883543 ntpd[1438]: Listen normally on 13 cali08e88f6d844 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 24 00:46:38.883600 ntpd[1438]: Listen normally on 14 cali52feb2dc912 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 24 00:46:38.883656 ntpd[1438]: Listen normally on 15 calibced8d2b459 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 24 00:46:38.883712 ntpd[1438]: Listen normally on 16 vxlan.calico [fe80::64a6:f7ff:fe55:836d%12]:123 Jan 24 00:46:39.088890 containerd[1468]: time="2026-01-24T00:46:39.088842640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:46:39.283045 containerd[1468]: time="2026-01-24T00:46:39.282978547Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:39.284640 containerd[1468]: time="2026-01-24T00:46:39.284507438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:46:39.284640 containerd[1468]: time="2026-01-24T00:46:39.284569553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:46:39.284854 kubelet[2637]: E0124 00:46:39.284806 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:46:39.285384 kubelet[2637]: E0124 00:46:39.284872 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:46:39.285384 kubelet[2637]: E0124 00:46:39.285039 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5ce10c0eba4c41889cfe691f8779a69a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsczk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68bfccf66c-47vl8_calico-system(45bea70f-544b-4bae-b0e7-4aaa5e7a4a02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:39.288039 containerd[1468]: time="2026-01-24T00:46:39.287688151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:46:39.442061 containerd[1468]: time="2026-01-24T00:46:39.441966954Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:39.443543 containerd[1468]: time="2026-01-24T00:46:39.443484547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:46:39.444556 containerd[1468]: time="2026-01-24T00:46:39.443493093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:46:39.444646 kubelet[2637]: E0124 00:46:39.443796 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:46:39.444646 kubelet[2637]: E0124 00:46:39.443852 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:46:39.444646 kubelet[2637]: E0124 00:46:39.444014 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsczk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68bfccf66c-47vl8_calico-system(45bea70f-544b-4bae-b0e7-4aaa5e7a4a02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:39.445218 kubelet[2637]: E0124 00:46:39.445155 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68bfccf66c-47vl8" podUID="45bea70f-544b-4bae-b0e7-4aaa5e7a4a02" Jan 24 00:46:41.088073 containerd[1468]: time="2026-01-24T00:46:41.087836354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:46:41.244522 containerd[1468]: time="2026-01-24T00:46:41.244439244Z" level=info msg="trying next 
host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:41.246268 containerd[1468]: time="2026-01-24T00:46:41.246082230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:46:41.246607 containerd[1468]: time="2026-01-24T00:46:41.246106295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:46:41.246871 kubelet[2637]: E0124 00:46:41.246813 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:46:41.247472 kubelet[2637]: E0124 00:46:41.246888 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:46:41.247651 kubelet[2637]: E0124 00:46:41.247138 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-k95gq_calico-system(eb162143-7b21-40da-95af-2a95960643a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:41.251843 containerd[1468]: time="2026-01-24T00:46:41.251802732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:46:41.410727 containerd[1468]: time="2026-01-24T00:46:41.410545298Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:41.412221 containerd[1468]: time="2026-01-24T00:46:41.412107350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:46:41.412396 containerd[1468]: time="2026-01-24T00:46:41.412148546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:46:41.412559 kubelet[2637]: E0124 00:46:41.412494 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:46:41.412700 kubelet[2637]: E0124 00:46:41.412565 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:46:41.412833 kubelet[2637]: E0124 00:46:41.412746 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-k95gq_calico-system(eb162143-7b21-40da-95af-2a95960643a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:41.414436 kubelet[2637]: E0124 00:46:41.414368 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6" Jan 24 00:46:42.033651 containerd[1468]: time="2026-01-24T00:46:42.033603390Z" level=info msg="StopPodSandbox for \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\"" Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.080 [WARNING][4997] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9897b791-32b8-489d-b2bb-407f3c85a8e0", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab", Pod:"coredns-668d6bf9bc-trwx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52feb2dc912", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.081 [INFO][4997] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.081 [INFO][4997] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" iface="eth0" netns="" Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.081 [INFO][4997] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.081 [INFO][4997] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.137 [INFO][5004] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" HandleID="k8s-pod-network.773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.138 [INFO][5004] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.138 [INFO][5004] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.147 [WARNING][5004] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" HandleID="k8s-pod-network.773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.147 [INFO][5004] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" HandleID="k8s-pod-network.773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.149 [INFO][5004] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:42.153061 containerd[1468]: 2026-01-24 00:46:42.151 [INFO][4997] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:42.154371 containerd[1468]: time="2026-01-24T00:46:42.153096361Z" level=info msg="TearDown network for sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\" successfully" Jan 24 00:46:42.154371 containerd[1468]: time="2026-01-24T00:46:42.153129773Z" level=info msg="StopPodSandbox for \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\" returns successfully" Jan 24 00:46:42.154371 containerd[1468]: time="2026-01-24T00:46:42.154360031Z" level=info msg="RemovePodSandbox for \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\"" Jan 24 00:46:42.154531 containerd[1468]: time="2026-01-24T00:46:42.154399560Z" level=info msg="Forcibly stopping sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\"" Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.228 [WARNING][5020] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9897b791-32b8-489d-b2bb-407f3c85a8e0", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"c28691d823074a5cbad090cc89425fd540ca504f9ea892a2b7122a95fac96dab", Pod:"coredns-668d6bf9bc-trwx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52feb2dc912", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.228 [INFO][5020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.228 [INFO][5020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" iface="eth0" netns="" Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.228 [INFO][5020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.228 [INFO][5020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.271 [INFO][5027] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" HandleID="k8s-pod-network.773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.271 [INFO][5027] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.271 [INFO][5027] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.282 [WARNING][5027] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" HandleID="k8s-pod-network.773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.282 [INFO][5027] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" HandleID="k8s-pod-network.773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--trwx8-eth0" Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.283 [INFO][5027] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:42.291860 containerd[1468]: 2026-01-24 00:46:42.286 [INFO][5020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d" Jan 24 00:46:42.291860 containerd[1468]: time="2026-01-24T00:46:42.290003441Z" level=info msg="TearDown network for sandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\" successfully" Jan 24 00:46:42.300143 containerd[1468]: time="2026-01-24T00:46:42.299330003Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:46:42.300143 containerd[1468]: time="2026-01-24T00:46:42.299412234Z" level=info msg="RemovePodSandbox \"773f4e5452dbf88e5c55c3bc88c90910984739d6ef451a56900201507e98bc9d\" returns successfully" Jan 24 00:46:42.300929 containerd[1468]: time="2026-01-24T00:46:42.300892647Z" level=info msg="StopPodSandbox for \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\"" Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.361 [WARNING][5042] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0", GenerateName:"calico-apiserver-7b6b4f6b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8ef6693-42b2-4015-8f31-4aeadd5a6288", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6b4f6b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f", Pod:"calico-apiserver-7b6b4f6b7-r8txm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibced8d2b459", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.362 [INFO][5042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.362 [INFO][5042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" iface="eth0" netns="" Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.362 [INFO][5042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.362 [INFO][5042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.413 [INFO][5049] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" HandleID="k8s-pod-network.46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.413 [INFO][5049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.413 [INFO][5049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.424 [WARNING][5049] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" HandleID="k8s-pod-network.46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.424 [INFO][5049] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" HandleID="k8s-pod-network.46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.426 [INFO][5049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:42.434040 containerd[1468]: 2026-01-24 00:46:42.428 [INFO][5042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:42.436035 containerd[1468]: time="2026-01-24T00:46:42.434124268Z" level=info msg="TearDown network for sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\" successfully" Jan 24 00:46:42.436035 containerd[1468]: time="2026-01-24T00:46:42.434311304Z" level=info msg="StopPodSandbox for \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\" returns successfully" Jan 24 00:46:42.436035 containerd[1468]: time="2026-01-24T00:46:42.435461444Z" level=info msg="RemovePodSandbox for \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\"" Jan 24 00:46:42.436035 containerd[1468]: time="2026-01-24T00:46:42.435500988Z" level=info msg="Forcibly stopping sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\"" Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.521 [WARNING][5063] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0", GenerateName:"calico-apiserver-7b6b4f6b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8ef6693-42b2-4015-8f31-4aeadd5a6288", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6b4f6b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"b78392c93c2947ef78932f0441e4d8b872db152f6f3d897bf3d2d980c50bad1f", Pod:"calico-apiserver-7b6b4f6b7-r8txm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibced8d2b459", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.521 [INFO][5063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.521 [INFO][5063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" iface="eth0" netns="" Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.521 [INFO][5063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.521 [INFO][5063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.556 [INFO][5070] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" HandleID="k8s-pod-network.46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.556 [INFO][5070] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.556 [INFO][5070] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.566 [WARNING][5070] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" HandleID="k8s-pod-network.46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.566 [INFO][5070] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" HandleID="k8s-pod-network.46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--r8txm-eth0" Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.568 [INFO][5070] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:42.571947 containerd[1468]: 2026-01-24 00:46:42.570 [INFO][5063] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0" Jan 24 00:46:42.574080 containerd[1468]: time="2026-01-24T00:46:42.572953200Z" level=info msg="TearDown network for sandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\" successfully" Jan 24 00:46:42.577700 containerd[1468]: time="2026-01-24T00:46:42.577654913Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:46:42.577861 containerd[1468]: time="2026-01-24T00:46:42.577748375Z" level=info msg="RemovePodSandbox \"46104182a458fae9a598f1f4d9244f09604d3b01a571ee819a5eb1e018df21a0\" returns successfully" Jan 24 00:46:42.578525 containerd[1468]: time="2026-01-24T00:46:42.578365416Z" level=info msg="StopPodSandbox for \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\"" Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.625 [WARNING][5084] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--66496bc8f5--xm522-eth0" Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.625 [INFO][5084] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.625 [INFO][5084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" iface="eth0" netns="" Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.625 [INFO][5084] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.625 [INFO][5084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.652 [INFO][5091] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" HandleID="k8s-pod-network.67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--66496bc8f5--xm522-eth0" Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.653 [INFO][5091] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.653 [INFO][5091] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.661 [WARNING][5091] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" HandleID="k8s-pod-network.67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--66496bc8f5--xm522-eth0" Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.661 [INFO][5091] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" HandleID="k8s-pod-network.67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--66496bc8f5--xm522-eth0" Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.663 [INFO][5091] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:42.667100 containerd[1468]: 2026-01-24 00:46:42.665 [INFO][5084] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:42.668128 containerd[1468]: time="2026-01-24T00:46:42.667154411Z" level=info msg="TearDown network for sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\" successfully" Jan 24 00:46:42.668128 containerd[1468]: time="2026-01-24T00:46:42.667216712Z" level=info msg="StopPodSandbox for \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\" returns successfully" Jan 24 00:46:42.668362 containerd[1468]: time="2026-01-24T00:46:42.668307916Z" level=info msg="RemovePodSandbox for \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\"" Jan 24 00:46:42.668362 containerd[1468]: time="2026-01-24T00:46:42.668350511Z" level=info msg="Forcibly stopping sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\"" Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.713 [WARNING][5106] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" WorkloadEndpoint="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--66496bc8f5--xm522-eth0" Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.713 [INFO][5106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.713 [INFO][5106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" iface="eth0" netns="" Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.713 [INFO][5106] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.713 [INFO][5106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.740 [INFO][5113] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" HandleID="k8s-pod-network.67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--66496bc8f5--xm522-eth0" Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.740 [INFO][5113] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.740 [INFO][5113] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.748 [WARNING][5113] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" HandleID="k8s-pod-network.67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--66496bc8f5--xm522-eth0" Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.749 [INFO][5113] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" HandleID="k8s-pod-network.67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-whisker--66496bc8f5--xm522-eth0" Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.750 [INFO][5113] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:42.753786 containerd[1468]: 2026-01-24 00:46:42.752 [INFO][5106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7" Jan 24 00:46:42.754560 containerd[1468]: time="2026-01-24T00:46:42.753836337Z" level=info msg="TearDown network for sandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\" successfully" Jan 24 00:46:42.758454 containerd[1468]: time="2026-01-24T00:46:42.758393713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:46:42.758682 containerd[1468]: time="2026-01-24T00:46:42.758474604Z" level=info msg="RemovePodSandbox \"67d5942217476f958f6b2e75f1630d9f8750442ddee10cdf6964e44b87f458b7\" returns successfully" Jan 24 00:46:42.759512 containerd[1468]: time="2026-01-24T00:46:42.759097710Z" level=info msg="StopPodSandbox for \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\"" Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.813 [WARNING][5127] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"10921630-8357-43e3-be35-29668acdc0c4", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71", Pod:"goldmane-666569f655-pshxz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali153317b0240", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.814 [INFO][5127] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.814 [INFO][5127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" iface="eth0" netns="" Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.814 [INFO][5127] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.814 [INFO][5127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.843 [INFO][5135] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" HandleID="k8s-pod-network.97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.843 [INFO][5135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.843 [INFO][5135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.854 [WARNING][5135] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" HandleID="k8s-pod-network.97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.854 [INFO][5135] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" HandleID="k8s-pod-network.97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.856 [INFO][5135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:42.859445 containerd[1468]: 2026-01-24 00:46:42.857 [INFO][5127] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:42.860257 containerd[1468]: time="2026-01-24T00:46:42.859424681Z" level=info msg="TearDown network for sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\" successfully" Jan 24 00:46:42.860257 containerd[1468]: time="2026-01-24T00:46:42.859754343Z" level=info msg="StopPodSandbox for \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\" returns successfully" Jan 24 00:46:42.862713 containerd[1468]: time="2026-01-24T00:46:42.861870796Z" level=info msg="RemovePodSandbox for \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\"" Jan 24 00:46:42.862713 containerd[1468]: time="2026-01-24T00:46:42.861919847Z" level=info msg="Forcibly stopping sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\"" Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.913 [WARNING][5149] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"10921630-8357-43e3-be35-29668acdc0c4", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"38cf44059e9d29579ace776ef8e3ba4ca373dad50067c3a1cb48aafb32872b71", Pod:"goldmane-666569f655-pshxz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali153317b0240", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.913 [INFO][5149] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.913 [INFO][5149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" iface="eth0" netns="" Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.913 [INFO][5149] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.913 [INFO][5149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.939 [INFO][5156] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" HandleID="k8s-pod-network.97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.939 [INFO][5156] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.939 [INFO][5156] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.948 [WARNING][5156] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" HandleID="k8s-pod-network.97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.948 [INFO][5156] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" HandleID="k8s-pod-network.97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-goldmane--666569f655--pshxz-eth0" Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.950 [INFO][5156] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:42.953805 containerd[1468]: 2026-01-24 00:46:42.952 [INFO][5149] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42" Jan 24 00:46:42.954873 containerd[1468]: time="2026-01-24T00:46:42.953876815Z" level=info msg="TearDown network for sandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\" successfully" Jan 24 00:46:42.958430 containerd[1468]: time="2026-01-24T00:46:42.958384590Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:46:42.958568 containerd[1468]: time="2026-01-24T00:46:42.958466103Z" level=info msg="RemovePodSandbox \"97b0b24aa248714f349f96f99eaac68b143f52158c3fefe12b40c5e1dcd44e42\" returns successfully" Jan 24 00:46:42.959163 containerd[1468]: time="2026-01-24T00:46:42.959125566Z" level=info msg="StopPodSandbox for \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\"" Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.005 [WARNING][5170] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0", GenerateName:"calico-kube-controllers-d8cb4b6c4-", Namespace:"calico-system", SelfLink:"", UID:"8d0e50ad-79cd-460b-a113-c524281d7733", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8cb4b6c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c", Pod:"calico-kube-controllers-d8cb4b6c4-b426z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08e88f6d844", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.005 [INFO][5170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.005 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" iface="eth0" netns="" Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.005 [INFO][5170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.005 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.037 [INFO][5177] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" HandleID="k8s-pod-network.fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.037 [INFO][5177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.037 [INFO][5177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.046 [WARNING][5177] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" HandleID="k8s-pod-network.fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.046 [INFO][5177] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" HandleID="k8s-pod-network.fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.048 [INFO][5177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:43.051845 containerd[1468]: 2026-01-24 00:46:43.050 [INFO][5170] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:43.053735 containerd[1468]: time="2026-01-24T00:46:43.051892127Z" level=info msg="TearDown network for sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\" successfully" Jan 24 00:46:43.053735 containerd[1468]: time="2026-01-24T00:46:43.051926870Z" level=info msg="StopPodSandbox for \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\" returns successfully" Jan 24 00:46:43.053735 containerd[1468]: time="2026-01-24T00:46:43.052573269Z" level=info msg="RemovePodSandbox for \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\"" Jan 24 00:46:43.053735 containerd[1468]: time="2026-01-24T00:46:43.052620092Z" level=info msg="Forcibly stopping sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\"" Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.098 [WARNING][5191] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0", GenerateName:"calico-kube-controllers-d8cb4b6c4-", Namespace:"calico-system", SelfLink:"", UID:"8d0e50ad-79cd-460b-a113-c524281d7733", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8cb4b6c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"4b680d8ca1092bffe2f09ec9dd5bcf4492ace9519dfd49b50afa3dbc0ea2192c", Pod:"calico-kube-controllers-d8cb4b6c4-b426z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08e88f6d844", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.098 [INFO][5191] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.098 [INFO][5191] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" iface="eth0" netns="" Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.098 [INFO][5191] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.098 [INFO][5191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.124 [INFO][5198] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" HandleID="k8s-pod-network.fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.124 [INFO][5198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.125 [INFO][5198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.133 [WARNING][5198] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" HandleID="k8s-pod-network.fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.133 [INFO][5198] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" HandleID="k8s-pod-network.fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--kube--controllers--d8cb4b6c4--b426z-eth0" Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.135 [INFO][5198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:43.139019 containerd[1468]: 2026-01-24 00:46:43.137 [INFO][5191] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be" Jan 24 00:46:43.139019 containerd[1468]: time="2026-01-24T00:46:43.138959952Z" level=info msg="TearDown network for sandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\" successfully" Jan 24 00:46:43.145884 containerd[1468]: time="2026-01-24T00:46:43.145799151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:46:43.146125 containerd[1468]: time="2026-01-24T00:46:43.145904824Z" level=info msg="RemovePodSandbox \"fae30a05115a865ad4fc6e248567f8a489e1abae6b1aa0537442665a609eb6be\" returns successfully" Jan 24 00:46:43.146846 containerd[1468]: time="2026-01-24T00:46:43.146801240Z" level=info msg="StopPodSandbox for \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\"" Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.194 [WARNING][5212] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb162143-7b21-40da-95af-2a95960643a6", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1", Pod:"csi-node-driver-k95gq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia1bf7bfa6c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.195 [INFO][5212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.195 [INFO][5212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" iface="eth0" netns="" Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.195 [INFO][5212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.195 [INFO][5212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.226 [INFO][5220] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" HandleID="k8s-pod-network.c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.226 [INFO][5220] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.226 [INFO][5220] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.236 [WARNING][5220] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" HandleID="k8s-pod-network.c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.236 [INFO][5220] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" HandleID="k8s-pod-network.c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.238 [INFO][5220] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:43.241876 containerd[1468]: 2026-01-24 00:46:43.240 [INFO][5212] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:43.243376 containerd[1468]: time="2026-01-24T00:46:43.241932998Z" level=info msg="TearDown network for sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\" successfully" Jan 24 00:46:43.243376 containerd[1468]: time="2026-01-24T00:46:43.241966458Z" level=info msg="StopPodSandbox for \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\" returns successfully" Jan 24 00:46:43.244358 containerd[1468]: time="2026-01-24T00:46:43.243664665Z" level=info msg="RemovePodSandbox for \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\"" Jan 24 00:46:43.244358 containerd[1468]: time="2026-01-24T00:46:43.243705419Z" level=info msg="Forcibly stopping sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\"" Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.290 [WARNING][5235] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb162143-7b21-40da-95af-2a95960643a6", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"82866238dea201754ecacd37dba2741727b55339561dd37fcf2ff4bfec71c6d1", Pod:"csi-node-driver-k95gq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia1bf7bfa6c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.291 [INFO][5235] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.291 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" iface="eth0" netns="" Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.291 [INFO][5235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.291 [INFO][5235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.318 [INFO][5242] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" HandleID="k8s-pod-network.c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.318 [INFO][5242] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.318 [INFO][5242] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.327 [WARNING][5242] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" HandleID="k8s-pod-network.c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.327 [INFO][5242] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" HandleID="k8s-pod-network.c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-csi--node--driver--k95gq-eth0" Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.330 [INFO][5242] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:43.333201 containerd[1468]: 2026-01-24 00:46:43.331 [INFO][5235] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5" Jan 24 00:46:43.334550 containerd[1468]: time="2026-01-24T00:46:43.333229215Z" level=info msg="TearDown network for sandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\" successfully" Jan 24 00:46:43.338234 containerd[1468]: time="2026-01-24T00:46:43.338116670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:46:43.338234 containerd[1468]: time="2026-01-24T00:46:43.338215866Z" level=info msg="RemovePodSandbox \"c5c5ecdf929087150892dcf3d10924c45ad2517032d22523076c3a71b6b88cd5\" returns successfully" Jan 24 00:46:43.338832 containerd[1468]: time="2026-01-24T00:46:43.338782265Z" level=info msg="StopPodSandbox for \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\"" Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.386 [WARNING][5256] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0", GenerateName:"calico-apiserver-7b6b4f6b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"35a4077d-452d-4ef7-8393-2463352fe219", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6b4f6b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1", Pod:"calico-apiserver-7b6b4f6b7-w9z5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali543549507b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.386 [INFO][5256] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.387 [INFO][5256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" iface="eth0" netns="" Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.387 [INFO][5256] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.387 [INFO][5256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.427 [INFO][5263] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" HandleID="k8s-pod-network.156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.427 [INFO][5263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.427 [INFO][5263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.436 [WARNING][5263] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" HandleID="k8s-pod-network.156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.436 [INFO][5263] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" HandleID="k8s-pod-network.156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.438 [INFO][5263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:43.443299 containerd[1468]: 2026-01-24 00:46:43.439 [INFO][5256] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:43.443299 containerd[1468]: time="2026-01-24T00:46:43.441331248Z" level=info msg="TearDown network for sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\" successfully" Jan 24 00:46:43.443299 containerd[1468]: time="2026-01-24T00:46:43.441390125Z" level=info msg="StopPodSandbox for \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\" returns successfully" Jan 24 00:46:43.443299 containerd[1468]: time="2026-01-24T00:46:43.442272702Z" level=info msg="RemovePodSandbox for \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\"" Jan 24 00:46:43.443299 containerd[1468]: time="2026-01-24T00:46:43.442314524Z" level=info msg="Forcibly stopping sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\"" Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.494 [WARNING][5277] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0", GenerateName:"calico-apiserver-7b6b4f6b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"35a4077d-452d-4ef7-8393-2463352fe219", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 46, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6b4f6b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"877f72db0683d2ddb0396bec9512cada1244dd187124d34821d4619fc34294a1", Pod:"calico-apiserver-7b6b4f6b7-w9z5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali543549507b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.494 [INFO][5277] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.494 [INFO][5277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" iface="eth0" netns="" Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.494 [INFO][5277] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.494 [INFO][5277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.536 [INFO][5284] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" HandleID="k8s-pod-network.156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.537 [INFO][5284] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.537 [INFO][5284] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.554 [WARNING][5284] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" HandleID="k8s-pod-network.156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.554 [INFO][5284] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" HandleID="k8s-pod-network.156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-calico--apiserver--7b6b4f6b7--w9z5c-eth0" Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.556 [INFO][5284] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:43.560275 containerd[1468]: 2026-01-24 00:46:43.558 [INFO][5277] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162" Jan 24 00:46:43.561485 containerd[1468]: time="2026-01-24T00:46:43.560342485Z" level=info msg="TearDown network for sandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\" successfully" Jan 24 00:46:43.564875 containerd[1468]: time="2026-01-24T00:46:43.564818493Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:46:43.565068 containerd[1468]: time="2026-01-24T00:46:43.564893219Z" level=info msg="RemovePodSandbox \"156d5a4a0492733069ebf06e4f5d3a86e76ff02cbac425ae00eb9ccbb34e6162\" returns successfully" Jan 24 00:46:43.565484 containerd[1468]: time="2026-01-24T00:46:43.565452336Z" level=info msg="StopPodSandbox for \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\"" Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.611 [WARNING][5298] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bc52092c-b025-48a8-bd58-59cac1d3f427", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c", Pod:"coredns-668d6bf9bc-8ln7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbceb4083d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.611 [INFO][5298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.611 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" iface="eth0" netns="" Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.611 [INFO][5298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.611 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.638 [INFO][5306] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" HandleID="k8s-pod-network.f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.639 [INFO][5306] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.639 [INFO][5306] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.649 [WARNING][5306] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" HandleID="k8s-pod-network.f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.649 [INFO][5306] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" HandleID="k8s-pod-network.f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.651 [INFO][5306] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:43.654628 containerd[1468]: 2026-01-24 00:46:43.652 [INFO][5298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:43.654628 containerd[1468]: time="2026-01-24T00:46:43.654453893Z" level=info msg="TearDown network for sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\" successfully" Jan 24 00:46:43.654628 containerd[1468]: time="2026-01-24T00:46:43.654487286Z" level=info msg="StopPodSandbox for \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\" returns successfully" Jan 24 00:46:43.655879 containerd[1468]: time="2026-01-24T00:46:43.655144390Z" level=info msg="RemovePodSandbox for \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\"" Jan 24 00:46:43.655879 containerd[1468]: time="2026-01-24T00:46:43.655209310Z" level=info msg="Forcibly stopping sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\"" Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.700 [WARNING][5320] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bc52092c-b025-48a8-bd58-59cac1d3f427", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260123-2100-7c516651e94b9dd9ff58", ContainerID:"714b3c9f4024bde48e7f77c052f3d7fd77a0c3c46a79cce3d86f5b8567b5e24c", Pod:"coredns-668d6bf9bc-8ln7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbceb4083d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.701 [INFO][5320] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.701 [INFO][5320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" iface="eth0" netns="" Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.701 [INFO][5320] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.701 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.731 [INFO][5328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" HandleID="k8s-pod-network.f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.731 [INFO][5328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.732 [INFO][5328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.740 [WARNING][5328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" HandleID="k8s-pod-network.f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.740 [INFO][5328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" HandleID="k8s-pod-network.f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Workload="ci--4081--3--6--nightly--20260123--2100--7c516651e94b9dd9ff58-k8s-coredns--668d6bf9bc--8ln7b-eth0" Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.742 [INFO][5328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:46:43.745694 containerd[1468]: 2026-01-24 00:46:43.743 [INFO][5320] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68" Jan 24 00:46:43.747785 containerd[1468]: time="2026-01-24T00:46:43.745592584Z" level=info msg="TearDown network for sandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\" successfully" Jan 24 00:46:43.751922 containerd[1468]: time="2026-01-24T00:46:43.751872604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
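The repeated teardown sequences above all trace the same decision flow: the CNI plugin refuses to delete a WorkloadEndpoint whose recorded ContainerID differs from CNI_CONTAINERID, and the IPAM plugin first tries to release the address by its handle ID, then falls back to the workload ID when the handle no longer exists. The following is a minimal Go sketch of that flow; the types and function names are hypothetical stand-ins, not Calico's actual API.

```go
// Illustrative sketch only: hypothetical types mirroring the decision flow
// traced by the cni-plugin/k8s.go and ipam/ipam_plugin.go entries above.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("address not found")

// wep stands in for the WorkloadEndpoint fields the log prints.
type wep struct {
	ContainerID string
	HandleID    string
	WorkloadID  string
}

// releaseByHandle stands in for the release-by-handleID path. In the
// captured run the handle no longer exists, so this path logs the
// "Asked to release address but it doesn't exist" warning and fails.
func releaseByHandle(handleID string) error {
	return errNotFound
}

// releaseByWorkload stands in for the release-by-workloadID fallback.
func releaseByWorkload(workloadID string) error { return nil }

// teardown mirrors the sequence: guard the WEP delete on a ContainerID
// match, then release the IP, falling back from handleID to workloadID.
func teardown(cniContainerID string, e wep) {
	if cniContainerID != e.ContainerID {
		fmt.Println("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP")
	}
	if err := releaseByHandle(e.HandleID); errors.Is(err, errNotFound) {
		fmt.Println("handle missing; releasing address using workloadID instead")
		_ = releaseByWorkload(e.WorkloadID)
	}
	fmt.Println("teardown processing complete")
}

func main() {
	teardown("97b0b2...", wep{
		ContainerID: "38cf44...",
		HandleID:    "k8s-pod-network.97b0b2...",
		WorkloadID:  "goldmane-666569f655-pshxz",
	})
}
```

This fallback is why each WARNING at ipam_plugin.go 453 is followed by a normal release-and-complete sequence rather than an error: the sandbox removal still succeeds even though the handle was already gone.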
Jan 24 00:46:43.752044 containerd[1468]: time="2026-01-24T00:46:43.751959134Z" level=info msg="RemovePodSandbox \"f0f3bd1ec19923fe62868d04abab573ca096cbab9ed2b95a4369eb3dbd792b68\" returns successfully" Jan 24 00:46:44.089941 containerd[1468]: time="2026-01-24T00:46:44.089891821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:46:44.260458 containerd[1468]: time="2026-01-24T00:46:44.260377193Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:44.261779 containerd[1468]: time="2026-01-24T00:46:44.261712044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:46:44.262038 containerd[1468]: time="2026-01-24T00:46:44.261751211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:46:44.262099 kubelet[2637]: E0124 00:46:44.262007 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:46:44.262099 kubelet[2637]: E0124 00:46:44.262079 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:46:44.262635 kubelet[2637]: E0124 00:46:44.262306 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc42n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-d8cb4b6c4-b426z_calico-system(8d0e50ad-79cd-460b-a113-c524281d7733): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:44.265267 kubelet[2637]: E0124 00:46:44.264005 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z" podUID="8d0e50ad-79cd-460b-a113-c524281d7733" Jan 24 00:46:45.089231 containerd[1468]: time="2026-01-24T00:46:45.088740806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:46:45.250538 containerd[1468]: time="2026-01-24T00:46:45.250459440Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:45.252024 containerd[1468]: time="2026-01-24T00:46:45.251961471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:46:45.252140 containerd[1468]: time="2026-01-24T00:46:45.252074859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:46:45.252383 kubelet[2637]: E0124 00:46:45.252304 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:45.252383 kubelet[2637]: E0124 00:46:45.252374 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:45.252921 kubelet[2637]: E0124 00:46:45.252738 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlrsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b6b4f6b7-w9z5c_calico-apiserver(35a4077d-452d-4ef7-8393-2463352fe219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:45.253102 containerd[1468]: time="2026-01-24T00:46:45.252947591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:46:45.254073 kubelet[2637]: E0124 00:46:45.254008 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c" podUID="35a4077d-452d-4ef7-8393-2463352fe219" Jan 24 00:46:45.412949 containerd[1468]: time="2026-01-24T00:46:45.412762309Z" level=info 
msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:45.414553 containerd[1468]: time="2026-01-24T00:46:45.414496933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:46:45.414757 containerd[1468]: time="2026-01-24T00:46:45.414528729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:46:45.414977 kubelet[2637]: E0124 00:46:45.414836 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:46:45.414977 kubelet[2637]: E0124 00:46:45.414897 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:46:45.415648 kubelet[2637]: E0124 00:46:45.415101 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8wl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pshxz_calico-system(10921630-8357-43e3-be35-29668acdc0c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:45.416543 kubelet[2637]: E0124 00:46:45.416460 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pshxz" podUID="10921630-8357-43e3-be35-29668acdc0c4" Jan 24 00:46:47.087832 containerd[1468]: time="2026-01-24T00:46:47.087750517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:46:47.253921 containerd[1468]: time="2026-01-24T00:46:47.253841913Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:47.255657 containerd[1468]: time="2026-01-24T00:46:47.255507204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:46:47.255657 containerd[1468]: time="2026-01-24T00:46:47.255560392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:46:47.255923 kubelet[2637]: E0124 00:46:47.255828 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:47.255923 kubelet[2637]: E0124 00:46:47.255910 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:47.257240 kubelet[2637]: E0124 00:46:47.256091 2637 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b6b4f6b7-r8txm_calico-apiserver(c8ef6693-42b2-4015-8f31-4aeadd5a6288): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:47.258300 kubelet[2637]: E0124 00:46:47.258086 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm" podUID="c8ef6693-42b2-4015-8f31-4aeadd5a6288" Jan 24 00:46:54.093923 kubelet[2637]: E0124 00:46:54.093721 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68bfccf66c-47vl8" podUID="45bea70f-544b-4bae-b0e7-4aaa5e7a4a02" Jan 24 00:46:54.553471 systemd[1]: Started sshd@7-10.128.0.29:22-4.153.228.146:55512.service - OpenSSH per-connection server daemon (4.153.228.146:55512). Jan 24 00:46:54.785200 sshd[5348]: Accepted publickey for core from 4.153.228.146 port 55512 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:46:54.786162 sshd[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:54.792915 systemd-logind[1449]: New session 8 of user core. Jan 24 00:46:54.797439 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:46:55.048786 sshd[5348]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:55.054514 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:46:55.055610 systemd[1]: sshd@7-10.128.0.29:22-4.153.228.146:55512.service: Deactivated successfully. Jan 24 00:46:55.058505 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:46:55.059986 systemd-logind[1449]: Removed session 8. Jan 24 00:46:55.090598 kubelet[2637]: E0124 00:46:55.090402 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6" Jan 24 00:46:56.089806 kubelet[2637]: E0124 00:46:56.089261 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c" podUID="35a4077d-452d-4ef7-8393-2463352fe219" Jan 24 00:46:58.090041 kubelet[2637]: E0124 00:46:58.089979 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z" podUID="8d0e50ad-79cd-460b-a113-c524281d7733" Jan 24 00:46:59.088411 kubelet[2637]: E0124 00:46:59.088114 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pshxz" podUID="10921630-8357-43e3-be35-29668acdc0c4" Jan 24 00:47:00.098562 systemd[1]: Started sshd@8-10.128.0.29:22-4.153.228.146:55520.service - OpenSSH per-connection server daemon (4.153.228.146:55520). Jan 24 00:47:00.328234 sshd[5392]: Accepted publickey for core from 4.153.228.146 port 55520 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:47:00.329996 sshd[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:00.336744 systemd-logind[1449]: New session 9 of user core. Jan 24 00:47:00.343675 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:47:00.575932 sshd[5392]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:00.580750 systemd[1]: sshd@8-10.128.0.29:22-4.153.228.146:55520.service: Deactivated successfully. Jan 24 00:47:00.583806 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:47:00.586416 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:47:00.588636 systemd-logind[1449]: Removed session 9. Jan 24 00:47:02.089878 kubelet[2637]: E0124 00:47:02.089301 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm" podUID="c8ef6693-42b2-4015-8f31-4aeadd5a6288" Jan 24 00:47:05.622625 systemd[1]: Started sshd@9-10.128.0.29:22-4.153.228.146:43154.service - OpenSSH per-connection server daemon (4.153.228.146:43154). Jan 24 00:47:05.845854 sshd[5406]: Accepted publickey for core from 4.153.228.146 port 43154 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:47:05.846769 sshd[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:05.853148 systemd-logind[1449]: New session 10 of user core. Jan 24 00:47:05.859429 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:47:06.102552 containerd[1468]: time="2026-01-24T00:47:06.102461986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:47:06.155061 sshd[5406]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:06.159967 systemd[1]: sshd@9-10.128.0.29:22-4.153.228.146:43154.service: Deactivated successfully. 
Jan 24 00:47:06.163636 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:47:06.166882 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:47:06.169392 systemd-logind[1449]: Removed session 10. Jan 24 00:47:06.204781 systemd[1]: Started sshd@10-10.128.0.29:22-4.153.228.146:43164.service - OpenSSH per-connection server daemon (4.153.228.146:43164). Jan 24 00:47:06.267432 containerd[1468]: time="2026-01-24T00:47:06.267352660Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:06.269081 containerd[1468]: time="2026-01-24T00:47:06.269015876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:47:06.269345 containerd[1468]: time="2026-01-24T00:47:06.269048792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:47:06.269410 kubelet[2637]: E0124 00:47:06.269332 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:47:06.269410 kubelet[2637]: E0124 00:47:06.269398 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:47:06.269977 kubelet[2637]: E0124 00:47:06.269557 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5ce10c0eba4c41889cfe691f8779a69a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsczk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-68bfccf66c-47vl8_calico-system(45bea70f-544b-4bae-b0e7-4aaa5e7a4a02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:06.273008 containerd[1468]: time="2026-01-24T00:47:06.272621344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:47:06.431897 sshd[5420]: Accepted publickey for core from 4.153.228.146 port 43164 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:47:06.434579 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:06.436098 containerd[1468]: time="2026-01-24T00:47:06.435858951Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:06.437611 containerd[1468]: time="2026-01-24T00:47:06.437553017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:47:06.437719 containerd[1468]: time="2026-01-24T00:47:06.437669788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:47:06.438720 kubelet[2637]: E0124 00:47:06.438156 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:47:06.438720 kubelet[2637]: E0124 00:47:06.438288 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:47:06.438720 kubelet[2637]: E0124 00:47:06.438490 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsczk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68bfccf66c-47vl8_calico-system(45bea70f-544b-4bae-b0e7-4aaa5e7a4a02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:06.442550 kubelet[2637]: E0124 00:47:06.442492 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68bfccf66c-47vl8" podUID="45bea70f-544b-4bae-b0e7-4aaa5e7a4a02" Jan 24 00:47:06.443518 systemd-logind[1449]: New session 11 of user core. Jan 24 00:47:06.449482 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:47:06.724507 sshd[5420]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:06.735739 systemd[1]: sshd@10-10.128.0.29:22-4.153.228.146:43164.service: Deactivated successfully. Jan 24 00:47:06.741152 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 24 00:47:06.745955 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:47:06.748963 systemd-logind[1449]: Removed session 11. Jan 24 00:47:06.769646 systemd[1]: Started sshd@11-10.128.0.29:22-4.153.228.146:43168.service - OpenSSH per-connection server daemon (4.153.228.146:43168). Jan 24 00:47:06.990308 sshd[5431]: Accepted publickey for core from 4.153.228.146 port 43168 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:47:06.992876 sshd[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:07.004429 systemd-logind[1449]: New session 12 of user core. Jan 24 00:47:07.010434 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:47:07.300518 sshd[5431]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:07.306108 systemd[1]: sshd@11-10.128.0.29:22-4.153.228.146:43168.service: Deactivated successfully. Jan 24 00:47:07.306485 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:47:07.311318 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:47:07.313824 systemd-logind[1449]: Removed session 12. Jan 24 00:47:08.097486 containerd[1468]: time="2026-01-24T00:47:08.097024864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:47:08.258135 containerd[1468]: time="2026-01-24T00:47:08.258062926Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:08.261209 containerd[1468]: time="2026-01-24T00:47:08.259741235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:47:08.261209 containerd[1468]: time="2026-01-24T00:47:08.259859276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:47:08.261443 kubelet[2637]: E0124 00:47:08.260201 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:47:08.261443 kubelet[2637]: E0124 00:47:08.260281 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:47:08.261443 kubelet[2637]: E0124 00:47:08.260658 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlrsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b6b4f6b7-w9z5c_calico-apiserver(35a4077d-452d-4ef7-8393-2463352fe219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:08.262498 containerd[1468]: time="2026-01-24T00:47:08.262456108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:47:08.262973 kubelet[2637]: E0124 00:47:08.262900 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c" podUID="35a4077d-452d-4ef7-8393-2463352fe219" Jan 24 00:47:08.426228 containerd[1468]: time="2026-01-24T00:47:08.425390832Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:08.427114 containerd[1468]: time="2026-01-24T00:47:08.427018593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:47:08.427268 containerd[1468]: time="2026-01-24T00:47:08.427056471Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:47:08.427544 kubelet[2637]: E0124 00:47:08.427492 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:47:08.427741 kubelet[2637]: E0124 00:47:08.427564 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:47:08.428295 kubelet[2637]: E0124 00:47:08.428230 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-k95gq_calico-system(eb162143-7b21-40da-95af-2a95960643a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:08.431121 containerd[1468]: time="2026-01-24T00:47:08.431080251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:47:08.591798 containerd[1468]: time="2026-01-24T00:47:08.591711263Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:08.594527 containerd[1468]: time="2026-01-24T00:47:08.593518673Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:47:08.594527 containerd[1468]: time="2026-01-24T00:47:08.594343407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:47:08.594928 kubelet[2637]: E0124 00:47:08.594776 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:47:08.594928 kubelet[2637]: E0124 00:47:08.594842 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:47:08.595587 kubelet[2637]: E0124 00:47:08.595015 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-k95gq_calico-system(eb162143-7b21-40da-95af-2a95960643a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:08.597121 kubelet[2637]: E0124 00:47:08.596999 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6" Jan 24 00:47:10.089027 containerd[1468]: time="2026-01-24T00:47:10.088806393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:47:10.264427 containerd[1468]: time="2026-01-24T00:47:10.264347889Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:10.265938 containerd[1468]: time="2026-01-24T00:47:10.265873215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:47:10.266594 containerd[1468]: time="2026-01-24T00:47:10.266000648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:47:10.266691 kubelet[2637]: E0124 00:47:10.266219 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:47:10.266691 kubelet[2637]: E0124 00:47:10.266275 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:47:10.266691 kubelet[2637]: E0124 00:47:10.266558 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc42n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-d8cb4b6c4-b426z_calico-system(8d0e50ad-79cd-460b-a113-c524281d7733): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:10.268710 kubelet[2637]: E0124 00:47:10.267866 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z" podUID="8d0e50ad-79cd-460b-a113-c524281d7733" Jan 24 00:47:10.268846 containerd[1468]: time="2026-01-24T00:47:10.267449757Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:47:10.427544 containerd[1468]: time="2026-01-24T00:47:10.427361597Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:10.429046 containerd[1468]: time="2026-01-24T00:47:10.428988758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:47:10.429284 containerd[1468]: time="2026-01-24T00:47:10.429020882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:47:10.429355 kubelet[2637]: E0124 00:47:10.429303 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:47:10.429422 kubelet[2637]: E0124 00:47:10.429375 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:47:10.429637 kubelet[2637]: E0124 00:47:10.429556 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8wl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pshxz_calico-system(10921630-8357-43e3-be35-29668acdc0c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:10.431313 kubelet[2637]: E0124 00:47:10.431254 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pshxz" podUID="10921630-8357-43e3-be35-29668acdc0c4" Jan 24 00:47:12.343571 systemd[1]: Started sshd@12-10.128.0.29:22-4.153.228.146:43178.service - OpenSSH per-connection server daemon (4.153.228.146:43178). Jan 24 00:47:12.573584 sshd[5444]: Accepted publickey for core from 4.153.228.146 port 43178 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:47:12.578476 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:12.587933 systemd-logind[1449]: New session 13 of user core. Jan 24 00:47:12.594418 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:47:12.845851 sshd[5444]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:12.852782 systemd[1]: sshd@12-10.128.0.29:22-4.153.228.146:43178.service: Deactivated successfully. Jan 24 00:47:12.860029 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:47:12.864093 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:47:12.866065 systemd-logind[1449]: Removed session 13. 
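The &Container{...} and &Probe{...} fragments inside the kubelet "Unhandled Error" dumps above are Go struct literals of the k8s.io/api/core/v1 types. Rebuilt as source, the goldmane probes from the dump read as below; this is a readability sketch assuming current client-go type names and the k8s.io/api module, not the Calico operator's code.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// Goldmane's probes as logged: exec actions against /health, liveness
// checked every 60s and readiness every 30s, three failures tolerated.
var (
	liveness = corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/health", "-live"}},
		},
		TimeoutSeconds:   5,
		PeriodSeconds:    60,
		SuccessThreshold: 1,
		FailureThreshold: 3,
	}
	readiness = corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/health", "-ready"}},
		},
		TimeoutSeconds:   5,
		PeriodSeconds:    30,
		SuccessThreshold: 1,
		FailureThreshold: 3,
	}
)

func main() { _ = liveness; _ = readiness }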
Jan 24 00:47:16.089987 containerd[1468]: time="2026-01-24T00:47:16.089855197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:47:16.248152 containerd[1468]: time="2026-01-24T00:47:16.248075476Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:16.250025 containerd[1468]: time="2026-01-24T00:47:16.249954800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:47:16.250253 containerd[1468]: time="2026-01-24T00:47:16.249996941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:47:16.250317 kubelet[2637]: E0124 00:47:16.250254 2637 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:47:16.250806 kubelet[2637]: E0124 00:47:16.250323 2637 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:47:16.250806 kubelet[2637]: E0124 00:47:16.250499 2637 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b6b4f6b7-r8txm_calico-apiserver(c8ef6693-42b2-4015-8f31-4aeadd5a6288): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:16.252279 kubelet[2637]: E0124 00:47:16.252169 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm" podUID="c8ef6693-42b2-4015-8f31-4aeadd5a6288" Jan 24 00:47:17.890608 systemd[1]: Started sshd@13-10.128.0.29:22-4.153.228.146:39990.service - OpenSSH per-connection server daemon (4.153.228.146:39990). Jan 24 00:47:18.116223 sshd[5469]: Accepted publickey for core from 4.153.228.146 port 39990 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:47:18.118411 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:18.125312 systemd-logind[1449]: New session 14 of user core. Jan 24 00:47:18.131401 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:47:18.367884 sshd[5469]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:18.372730 systemd[1]: sshd@13-10.128.0.29:22-4.153.228.146:39990.service: Deactivated successfully. Jan 24 00:47:18.376319 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:47:18.379402 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:47:18.380920 systemd-logind[1449]: Removed session 14. 
Jan 24 00:47:21.089991 kubelet[2637]: E0124 00:47:21.089701 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68bfccf66c-47vl8" podUID="45bea70f-544b-4bae-b0e7-4aaa5e7a4a02" Jan 24 00:47:22.092818 kubelet[2637]: E0124 00:47:22.091337 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c" podUID="35a4077d-452d-4ef7-8393-2463352fe219" Jan 24 00:47:23.089846 kubelet[2637]: E0124 00:47:23.089754 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6" Jan 24 00:47:23.416611 systemd[1]: Started sshd@14-10.128.0.29:22-4.153.228.146:39998.service - OpenSSH per-connection server daemon (4.153.228.146:39998). Jan 24 00:47:23.651227 sshd[5484]: Accepted publickey for core from 4.153.228.146 port 39998 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo Jan 24 00:47:23.653260 sshd[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:23.661149 systemd-logind[1449]: New session 15 of user core. Jan 24 00:47:23.663470 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:47:23.904737 sshd[5484]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:23.910537 systemd[1]: sshd@14-10.128.0.29:22-4.153.228.146:39998.service: Deactivated successfully. 
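Every failed pull in this log carries an image="..." field, which makes the journal easy to triage mechanically. The following is a hypothetical helper (the filename and pipeline are mine, not from the source) that counts which refs are failing; fed something like journalctl -u kubelet on stdin, it would report every v3.30.4 Calico ref and nothing else, pointing at a single bad tag rather than registry flakiness.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the image="..." field kubelet attaches to pull errors,
	// e.g. image="ghcr.io/flatcar/calico/goldmane:v3.30.4".
	re := regexp.MustCompile(`image="([^"]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // dump lines run long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for ref, n := range counts {
		fmt.Printf("%6d  %s\n", n, ref)
	}
}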
Jan 24 00:47:23.913997 systemd[1]: session-15.scope: Deactivated successfully.
Jan 24 00:47:23.915782 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit.
Jan 24 00:47:23.917384 systemd-logind[1449]: Removed session 15.
Jan 24 00:47:24.090235 kubelet[2637]: E0124 00:47:24.089751 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pshxz" podUID="10921630-8357-43e3-be35-29668acdc0c4"
Jan 24 00:47:25.088658 kubelet[2637]: E0124 00:47:25.088050 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z" podUID="8d0e50ad-79cd-460b-a113-c524281d7733"
Jan 24 00:47:28.949590 systemd[1]: Started sshd@15-10.128.0.29:22-4.153.228.146:55700.service - OpenSSH per-connection server daemon (4.153.228.146:55700).
Jan 24 00:47:29.169043 sshd[5519]: Accepted publickey for core from 4.153.228.146 port 55700 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo
Jan 24 00:47:29.171026 sshd[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:29.177286 systemd-logind[1449]: New session 16 of user core.
Jan 24 00:47:29.182395 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 24 00:47:29.415257 sshd[5519]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:29.421075 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Jan 24 00:47:29.422523 systemd[1]: sshd@15-10.128.0.29:22-4.153.228.146:55700.service: Deactivated successfully.
Jan 24 00:47:29.425952 systemd[1]: session-16.scope: Deactivated successfully.
Jan 24 00:47:29.427872 systemd-logind[1449]: Removed session 16.
Jan 24 00:47:29.470602 systemd[1]: Started sshd@16-10.128.0.29:22-4.153.228.146:55716.service - OpenSSH per-connection server daemon (4.153.228.146:55716).
Jan 24 00:47:29.711021 sshd[5532]: Accepted publickey for core from 4.153.228.146 port 55716 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo
Jan 24 00:47:29.713102 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:29.720469 systemd-logind[1449]: New session 17 of user core.
Jan 24 00:47:29.725443 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 24 00:47:30.030487 sshd[5532]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:30.035373 systemd[1]: sshd@16-10.128.0.29:22-4.153.228.146:55716.service: Deactivated successfully.
Jan 24 00:47:30.038408 systemd[1]: session-17.scope: Deactivated successfully.
Jan 24 00:47:30.040600 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Jan 24 00:47:30.043013 systemd-logind[1449]: Removed session 17.
Jan 24 00:47:30.078640 systemd[1]: Started sshd@17-10.128.0.29:22-4.153.228.146:55720.service - OpenSSH per-connection server daemon (4.153.228.146:55720).
Jan 24 00:47:30.301649 sshd[5543]: Accepted publickey for core from 4.153.228.146 port 55720 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo
Jan 24 00:47:30.303919 sshd[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:30.312618 systemd-logind[1449]: New session 18 of user core.
Jan 24 00:47:30.320462 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 24 00:47:31.137947 sshd[5543]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:31.143788 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Jan 24 00:47:31.145558 systemd[1]: sshd@17-10.128.0.29:22-4.153.228.146:55720.service: Deactivated successfully.
Jan 24 00:47:31.151699 systemd[1]: session-18.scope: Deactivated successfully.
Jan 24 00:47:31.158285 systemd-logind[1449]: Removed session 18.
Jan 24 00:47:31.187684 systemd[1]: Started sshd@18-10.128.0.29:22-4.153.228.146:55734.service - OpenSSH per-connection server daemon (4.153.228.146:55734).
Jan 24 00:47:31.436602 sshd[5561]: Accepted publickey for core from 4.153.228.146 port 55734 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo
Jan 24 00:47:31.438705 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:31.444454 systemd-logind[1449]: New session 19 of user core.
Jan 24 00:47:31.450419 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 24 00:47:31.890088 sshd[5561]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:31.895795 systemd[1]: sshd@18-10.128.0.29:22-4.153.228.146:55734.service: Deactivated successfully.
Jan 24 00:47:31.900060 systemd[1]: session-19.scope: Deactivated successfully.
Jan 24 00:47:31.901311 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Jan 24 00:47:31.903200 systemd-logind[1449]: Removed session 19.
Jan 24 00:47:31.937586 systemd[1]: Started sshd@19-10.128.0.29:22-4.153.228.146:55748.service - OpenSSH per-connection server daemon (4.153.228.146:55748).
Jan 24 00:47:32.089052 kubelet[2637]: E0124 00:47:32.088308 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm" podUID="c8ef6693-42b2-4015-8f31-4aeadd5a6288"
Jan 24 00:47:32.180294 sshd[5572]: Accepted publickey for core from 4.153.228.146 port 55748 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo
Jan 24 00:47:32.182871 sshd[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:32.188579 systemd-logind[1449]: New session 20 of user core.
Jan 24 00:47:32.193382 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 24 00:47:32.430560 sshd[5572]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:32.437018 systemd[1]: sshd@19-10.128.0.29:22-4.153.228.146:55748.service: Deactivated successfully.
Jan 24 00:47:32.440754 systemd[1]: session-20.scope: Deactivated successfully.
Jan 24 00:47:32.442059 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Jan 24 00:47:32.443766 systemd-logind[1449]: Removed session 20.
Jan 24 00:47:33.089380 kubelet[2637]: E0124 00:47:33.089247 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68bfccf66c-47vl8" podUID="45bea70f-544b-4bae-b0e7-4aaa5e7a4a02"
Jan 24 00:47:36.091029 kubelet[2637]: E0124 00:47:36.090840 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pshxz" podUID="10921630-8357-43e3-be35-29668acdc0c4"
Jan 24 00:47:37.089337 kubelet[2637]: E0124 00:47:37.089059 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c" podUID="35a4077d-452d-4ef7-8393-2463352fe219"
Jan 24 00:47:37.091465 kubelet[2637]: E0124 00:47:37.091282 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6"
Jan 24 00:47:37.481644 systemd[1]: Started sshd@20-10.128.0.29:22-4.153.228.146:36118.service - OpenSSH per-connection server daemon (4.153.228.146:36118).
Jan 24 00:47:37.702750 sshd[5585]: Accepted publickey for core from 4.153.228.146 port 36118 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo
Jan 24 00:47:37.704723 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:37.711854 systemd-logind[1449]: New session 21 of user core.
Jan 24 00:47:37.714435 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 24 00:47:37.948398 sshd[5585]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:37.953121 systemd[1]: sshd@20-10.128.0.29:22-4.153.228.146:36118.service: Deactivated successfully.
Jan 24 00:47:37.956430 systemd[1]: session-21.scope: Deactivated successfully.
Jan 24 00:47:37.959205 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Jan 24 00:47:37.960931 systemd-logind[1449]: Removed session 21.
Jan 24 00:47:38.088900 kubelet[2637]: E0124 00:47:38.088594 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z" podUID="8d0e50ad-79cd-460b-a113-c524281d7733"
Jan 24 00:47:42.994985 systemd[1]: Started sshd@21-10.128.0.29:22-4.153.228.146:36130.service - OpenSSH per-connection server daemon (4.153.228.146:36130).
Jan 24 00:47:43.218116 sshd[5603]: Accepted publickey for core from 4.153.228.146 port 36130 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo
Jan 24 00:47:43.220052 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:43.226052 systemd-logind[1449]: New session 22 of user core.
Jan 24 00:47:43.231402 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 24 00:47:43.466589 sshd[5603]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:43.472151 systemd[1]: sshd@21-10.128.0.29:22-4.153.228.146:36130.service: Deactivated successfully.
Jan 24 00:47:43.475671 systemd[1]: session-22.scope: Deactivated successfully.
Jan 24 00:47:43.477049 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Jan 24 00:47:43.478651 systemd-logind[1449]: Removed session 22.
Jan 24 00:47:45.089068 kubelet[2637]: E0124 00:47:45.088949 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68bfccf66c-47vl8" podUID="45bea70f-544b-4bae-b0e7-4aaa5e7a4a02"
Jan 24 00:47:47.088573 kubelet[2637]: E0124 00:47:47.088516 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-r8txm" podUID="c8ef6693-42b2-4015-8f31-4aeadd5a6288"
Jan 24 00:47:48.090573 kubelet[2637]: E0124 00:47:48.090484 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k95gq" podUID="eb162143-7b21-40da-95af-2a95960643a6"
Jan 24 00:47:48.091357 kubelet[2637]: E0124 00:47:48.090655 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6b4f6b7-w9z5c" podUID="35a4077d-452d-4ef7-8393-2463352fe219"
Jan 24 00:47:48.511636 systemd[1]: Started sshd@22-10.128.0.29:22-4.153.228.146:58636.service - OpenSSH per-connection server daemon (4.153.228.146:58636).
Jan 24 00:47:48.736023 sshd[5616]: Accepted publickey for core from 4.153.228.146 port 58636 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo
Jan 24 00:47:48.738287 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:48.748025 systemd-logind[1449]: New session 23 of user core.
Jan 24 00:47:48.755778 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 24 00:47:49.033549 sshd[5616]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:49.041622 systemd[1]: sshd@22-10.128.0.29:22-4.153.228.146:58636.service: Deactivated successfully.
Jan 24 00:47:49.042042 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Jan 24 00:47:49.047896 systemd[1]: session-23.scope: Deactivated successfully.
Jan 24 00:47:49.053702 systemd-logind[1449]: Removed session 23.
Jan 24 00:47:50.090497 kubelet[2637]: E0124 00:47:50.090018 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d8cb4b6c4-b426z" podUID="8d0e50ad-79cd-460b-a113-c524281d7733"
Jan 24 00:47:50.091038 kubelet[2637]: E0124 00:47:50.090617 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pshxz" podUID="10921630-8357-43e3-be35-29668acdc0c4"
Jan 24 00:47:54.086323 systemd[1]: Started sshd@23-10.128.0.29:22-4.153.228.146:58642.service - OpenSSH per-connection server daemon (4.153.228.146:58642).
Jan 24 00:47:54.327759 sshd[5632]: Accepted publickey for core from 4.153.228.146 port 58642 ssh2: RSA SHA256:p4bIjnLfccu64VVnSspQhbGoZQkx/PAas7r/ShXxCNo
Jan 24 00:47:54.330788 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:54.339292 systemd-logind[1449]: New session 24 of user core.
Jan 24 00:47:54.348667 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 24 00:47:54.630603 sshd[5632]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:54.640887 systemd[1]: sshd@23-10.128.0.29:22-4.153.228.146:58642.service: Deactivated successfully.
Jan 24 00:47:54.645226 systemd[1]: session-24.scope: Deactivated successfully.
Jan 24 00:47:54.647820 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit.
Jan 24 00:47:54.651083 systemd-logind[1449]: Removed session 24.
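
The kubelet errors repeating through this log all share one root cause: containerd reports NotFound for every ghcr.io/flatcar/calico/*:v3.30.4 reference, so each pull fails with ErrImagePull and kubelet then re-queues the pod under ImagePullBackOff. One way to confirm from any machine that the tag itself is missing (rather than, say, node-side auth or networking) is to query ghcr.io's OCI distribution API. The Python sketch below is illustrative only, not part of this log: the repository and tag come from the kubelet errors above, and the anonymous-token flow assumes the repository is public.

    import json
    import urllib.error
    import urllib.request

    # Repository and tag taken from the kubelet errors in the log above.
    REPO = "flatcar/calico/apiserver"
    TAG = "v3.30.4"

    # ghcr.io issues anonymous bearer tokens for public repositories.
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{REPO}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # HEAD the manifest for the tag: HTTP 200 means the tag exists,
    # HTTP 404 matches the "not found" failures containerd reported.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print(f"{REPO}:{TAG} exists (HTTP {resp.status})")
    except urllib.error.HTTPError as err:
        print(f"{REPO}:{TAG} lookup failed: HTTP {err.code}")

If the tag really is absent upstream, no amount of node-side retrying will help: kubelet keeps retrying pulls under a capped back-off, which is exactly the ImagePullBackOff churn visible above, and it will continue until the manifests are pointed at a tag that exists or the images are mirrored under the expected name.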