Jan 23 00:57:19.130403 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026 Jan 23 00:57:19.130450 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 00:57:19.130476 kernel: BIOS-provided physical RAM map: Jan 23 00:57:19.130492 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 23 00:57:19.130507 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 23 00:57:19.130536 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 23 00:57:19.130556 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 23 00:57:19.130572 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 23 00:57:19.130588 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd2e4fff] usable Jan 23 00:57:19.130609 kernel: BIOS-e820: [mem 0x00000000bd2e5000-0x00000000bd2eefff] ACPI data Jan 23 00:57:19.130626 kernel: BIOS-e820: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] usable Jan 23 00:57:19.130641 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jan 23 00:57:19.130655 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 23 00:57:19.130671 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 23 00:57:19.130691 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 23 00:57:19.130713 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 23 00:57:19.130729 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 23 00:57:19.130745 kernel: NX (Execute Disable) protection: active Jan 23 00:57:19.130762 kernel: APIC: Static calls initialized Jan 23 00:57:19.130777 kernel: efi: EFI v2.7 by EDK II Jan 23 00:57:19.130794 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 RNG=0xbfb73018 TPMEventLog=0xbd2e5018 Jan 23 00:57:19.130808 kernel: random: crng init done Jan 23 00:57:19.130823 kernel: secureboot: Secure boot disabled Jan 23 00:57:19.130839 kernel: SMBIOS 2.4 present. 
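The usable ranges in the BIOS-e820 map above can be cross-checked against the page accounting the kernel prints a little later ("Built 1 zonelists ... Total pages: 1965136"). A minimal sketch using only the map as printed (note that e820 ranges are inclusive, so each is end - start + 1 bytes):

```python
import re

e820 = """\
BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
BIOS-e820: [mem 0x0000000000100000-0x00000000bd2e4fff] usable
BIOS-e820: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] usable
BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
"""

pages = 0
for start, end in re.findall(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable", e820):
    # inclusive range -> length in bytes is end - start + 1; 4096 bytes per page
    pages += (int(end, 16) - int(start, 16) + 1) // 4096

print(pages)  # 1965136 -- exactly the "Total pages: 1965136" reported later in this log
```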
Jan 23 00:57:19.130855 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025 Jan 23 00:57:19.130896 kernel: DMI: Memory slots populated: 1/1 Jan 23 00:57:19.130913 kernel: Hypervisor detected: KVM Jan 23 00:57:19.130931 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 23 00:57:19.130949 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 00:57:19.130966 kernel: kvm-clock: using sched offset of 15156301438 cycles Jan 23 00:57:19.130985 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 00:57:19.131004 kernel: tsc: Detected 2299.998 MHz processor Jan 23 00:57:19.131022 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 00:57:19.131041 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 00:57:19.131077 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 23 00:57:19.131102 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 23 00:57:19.131118 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 00:57:19.131134 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 23 00:57:19.131150 kernel: Using GB pages for direct mapping Jan 23 00:57:19.131168 kernel: ACPI: Early table checksum verification disabled Jan 23 00:57:19.131203 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 23 00:57:19.131227 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 23 00:57:19.131255 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 23 00:57:19.131272 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 23 00:57:19.131290 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 23 00:57:19.131307 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Jan 23 00:57:19.131323 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 23 00:57:19.131339 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 23 00:57:19.131357 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 23 00:57:19.131382 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 23 00:57:19.131398 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 23 00:57:19.131415 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 23 00:57:19.131432 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 23 00:57:19.131449 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 23 00:57:19.131468 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 23 00:57:19.131485 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 23 00:57:19.131503 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 23 00:57:19.131520 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 23 00:57:19.131546 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 23 00:57:19.131566 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 23 00:57:19.131585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 23 00:57:19.131605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 23 00:57:19.131624 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 23 00:57:19.131644 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Jan 23 00:57:19.131663 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Jan 23 00:57:19.131683 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff] Jan 23 00:57:19.131703 kernel: Zone ranges: Jan 23 00:57:19.131728 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 00:57:19.131748 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 23 00:57:19.131767 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 23 00:57:19.131787 kernel: Device empty Jan 23 00:57:19.131807 kernel: Movable zone start for each node Jan 23 00:57:19.131826 kernel: Early memory node ranges Jan 23 00:57:19.131845 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 23 00:57:19.131865 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 23 00:57:19.133925 kernel: node 0: [mem 0x0000000000100000-0x00000000bd2e4fff] Jan 23 00:57:19.133957 kernel: node 0: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] Jan 23 00:57:19.133976 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 23 00:57:19.133996 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 23 00:57:19.134014 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 23 00:57:19.134033 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 00:57:19.134052 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 23 00:57:19.134070 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 23 00:57:19.134089 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Jan 23 00:57:19.134108 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 23 00:57:19.134131 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 23 00:57:19.134150 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 23 00:57:19.134169 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 00:57:19.134197 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 00:57:19.134216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 00:57:19.134234 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 00:57:19.134253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 00:57:19.134272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 00:57:19.134290 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 00:57:19.134314 kernel: CPU topo: Max. logical packages: 1 Jan 23 00:57:19.134333 kernel: CPU topo: Max. logical dies: 1 Jan 23 00:57:19.134351 kernel: CPU topo: Max. dies per package: 1 Jan 23 00:57:19.134370 kernel: CPU topo: Max. threads per core: 2 Jan 23 00:57:19.134388 kernel: CPU topo: Num. cores per package: 1
Jan 23 00:57:19.134407 kernel: CPU topo: Num. threads per package: 2 Jan 23 00:57:19.134426 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 23 00:57:19.134445 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 23 00:57:19.134464 kernel: Booting paravirtualized kernel on KVM Jan 23 00:57:19.134483 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 00:57:19.134507 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 23 00:57:19.134525 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 23 00:57:19.134544 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 23 00:57:19.134562 kernel: pcpu-alloc: [0] 0 1 Jan 23 00:57:19.134580 kernel: kvm-guest: PV spinlocks enabled Jan 23 00:57:19.134599 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 00:57:19.134619 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 00:57:19.134638 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 23 00:57:19.134661 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 00:57:19.134680 kernel: Fallback order for Node 0: 0 Jan 23 00:57:19.134698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136 Jan 23 00:57:19.134727 kernel: Policy zone: Normal Jan 23 00:57:19.134746 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 00:57:19.134766 kernel: software IO TLB: area num 2. Jan 23 00:57:19.134808 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 00:57:19.134832 kernel: Kernel/User page tables isolation: enabled Jan 23 00:57:19.134852 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 00:57:19.134873 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 00:57:19.136267 kernel: Dynamic Preempt: voluntary Jan 23 00:57:19.136293 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 00:57:19.136323 kernel: rcu: RCU event tracing is enabled. Jan 23 00:57:19.136346 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 00:57:19.136367 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 00:57:19.136388 kernel: Rude variant of Tasks RCU enabled. Jan 23 00:57:19.136407 kernel: Tracing variant of Tasks RCU enabled. Jan 23 00:57:19.136432 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 00:57:19.136453 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 00:57:19.136474 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 00:57:19.136495 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 00:57:19.136517 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 00:57:19.136538 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 23 00:57:19.136559 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 00:57:19.136580 kernel: Console: colour dummy device 80x25 Jan 23 00:57:19.136601 kernel: printk: legacy console [ttyS0] enabled Jan 23 00:57:19.136626 kernel: ACPI: Core revision 20240827 Jan 23 00:57:19.136647 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 00:57:19.136668 kernel: x2apic enabled Jan 23 00:57:19.136689 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 00:57:19.136710 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 23 00:57:19.136731 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 23 00:57:19.136752 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jan 23 00:57:19.136772 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 23 00:57:19.136793 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 23 00:57:19.136819 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 00:57:19.136847 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jan 23 00:57:19.136868 kernel: Spectre V2 : Mitigation: IBRS Jan 23 00:57:19.137927 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 00:57:19.137953 kernel: RETBleed: Mitigation: IBRS Jan 23 00:57:19.137975 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 23 00:57:19.137996 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 23 00:57:19.138017 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 23 00:57:19.138045 kernel: MDS: Mitigation: Clear CPU buffers Jan 23 00:57:19.138065 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 23 00:57:19.138086 kernel: active return thunk: its_return_thunk Jan 23 00:57:19.138107 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 23 00:57:19.138128 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 00:57:19.138149 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 00:57:19.138170 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 00:57:19.138200 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 00:57:19.138221 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 23 00:57:19.138247 kernel: Freeing SMP alternatives memory: 32K Jan 23 00:57:19.138267 kernel: pid_max: default: 32768 minimum: 301 Jan 23 00:57:19.138289 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 00:57:19.138310 kernel: landlock: Up and running. Jan 23 00:57:19.138331 kernel: SELinux: Initializing. Jan 23 00:57:19.138351 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 23 00:57:19.138372 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 23 00:57:19.138393 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 23 00:57:19.138415 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 23 00:57:19.138440 kernel: signal: max sigframe size: 1776 Jan 23 00:57:19.138461 kernel: rcu: Hierarchical SRCU implementation. Jan 23 00:57:19.138484 kernel: rcu: Max phase no-delay instances is 400. 
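The "4599.99 BogoMIPS (lpj=2299998)" value above follows directly from the 2299.998 MHz TSC: with calibration skipped, the kernel derives loops_per_jiffy from tsc_khz and prints it as lpj/(500000/HZ). A small arithmetic sketch; HZ=1000 is an assumption, though lpj == tsc_khz is only consistent with that value:

```python
tsc_khz = 2_299_998          # "tsc: Detected 2299.998 MHz processor"
HZ = 1000                    # assumed CONFIG_HZ for this kernel build
lpj = tsc_khz * 1000 // HZ   # 2299998, matching "(lpj=2299998)" in the log

# kernel-style BogoMIPS printout: integer part lpj/(500000/HZ), fraction (lpj/(5000/HZ)) % 100
print(f"{lpj // (500000 // HZ)}.{(lpj // (5000 // HZ)) % 100:02d} BogoMIPS")  # 4599.99

# "kvm-clock: using sched offset of 15156301438 cycles" at ~2.3 GHz works out to
# roughly 6.6 s between the host starting the VM clock and this kernel initializing it:
print(round(15_156_301_438 / (tsc_khz * 1e3), 2), "s")  # ~6.59 s
```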
Jan 23 00:57:19.138505 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 00:57:19.138526 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 23 00:57:19.138547 kernel: smp: Bringing up secondary CPUs ... Jan 23 00:57:19.138568 kernel: smpboot: x86: Booting SMP configuration: Jan 23 00:57:19.138589 kernel: .... node #0, CPUs: #1 Jan 23 00:57:19.138611 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 23 00:57:19.138639 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 23 00:57:19.138659 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 00:57:19.138680 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 23 00:57:19.138702 kernel: Memory: 7555812K/7860544K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 298900K reserved, 0K cma-reserved) Jan 23 00:57:19.138723 kernel: devtmpfs: initialized Jan 23 00:57:19.138744 kernel: x86/mm: Memory block size: 128MB Jan 23 00:57:19.138765 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 23 00:57:19.138786 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 00:57:19.138812 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 00:57:19.138833 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 00:57:19.138854 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 00:57:19.138874 kernel: audit: initializing netlink subsys (disabled) Jan 23 00:57:19.138918 kernel: audit: type=2000 audit(1769129835.314:1): state=initialized audit_enabled=0 res=1 Jan 23 00:57:19.138945 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 00:57:19.138966 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 00:57:19.138987 kernel: cpuidle: using governor menu Jan 23 00:57:19.139007 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 00:57:19.139034 kernel: dca service started, version 1.12.1 Jan 23 00:57:19.139055 kernel: PCI: Using configuration type 1 for base access Jan 23 00:57:19.139076 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
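The "Memory: 7555812K/7860544K available" summary above squares with the zone accounting from earlier in this log: 1965136 total pages at 4 KiB each is exactly 7860544K. A quick check:

```python
total_pages = 1_965_136   # "Built 1 zonelists, mobility grouping on. Total pages: 1965136"
print(total_pages * 4)    # 7860544 (KiB) -- the denominator in the Memory: line

available_k, total_k = 7_555_812, 7_860_544
print(f"{available_k / total_k:.1%} of RAM usable after early reservations")  # ~96.1%
```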
Jan 23 00:57:19.139098 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 00:57:19.139119 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 00:57:19.139139 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 00:57:19.139160 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 00:57:19.139188 kernel: ACPI: Added _OSI(Module Device) Jan 23 00:57:19.139209 kernel: ACPI: Added _OSI(Processor Device) Jan 23 00:57:19.139235 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 00:57:19.139256 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 23 00:57:19.139277 kernel: ACPI: Interpreter enabled Jan 23 00:57:19.139298 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 00:57:19.139318 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 00:57:19.139339 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 00:57:19.139360 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 23 00:57:19.139381 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 23 00:57:19.139401 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 00:57:19.139712 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 23 00:57:19.142035 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 23 00:57:19.142314 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 23 00:57:19.142341 kernel: PCI host bridge to bus 0000:00 Jan 23 00:57:19.142574 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 00:57:19.142790 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 00:57:19.143034 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 00:57:19.143257 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 23 00:57:19.143469 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 00:57:19.143721 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jan 23 00:57:19.145249 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jan 23 00:57:19.145517 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jan 23 00:57:19.145755 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 23 00:57:19.146026 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Jan 23 00:57:19.146272 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jan 23 00:57:19.146519 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Jan 23 00:57:19.146760 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 23 00:57:19.149055 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Jan 23 00:57:19.149315 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Jan 23 00:57:19.149570 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 23 00:57:19.149804 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Jan 23 00:57:19.150058 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Jan 23 00:57:19.150083 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 00:57:19.150105 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 00:57:19.150126 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 00:57:19.150148 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 00:57:19.150169 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 23 00:57:19.150206 kernel: iommu: Default domain type: Translated Jan 23 00:57:19.150227 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 00:57:19.150248 kernel: efivars: Registered efivars operations Jan 23 00:57:19.150269 kernel: PCI: Using ACPI for IRQ routing Jan 23 00:57:19.150290 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 00:57:19.150311 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 23 00:57:19.150332 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 23 00:57:19.150353 kernel: e820: reserve RAM buffer [mem 0xbd2e5000-0xbfffffff] Jan 23 00:57:19.150373 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 23 00:57:19.150399 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 23 00:57:19.150420 kernel: vgaarb: loaded Jan 23 00:57:19.150439 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 00:57:19.150461 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 00:57:19.150481 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 00:57:19.150502 kernel: pnp: PnP ACPI init Jan 23 00:57:19.150522 kernel: pnp: PnP ACPI: found 7 devices Jan 23 00:57:19.150544 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 00:57:19.150564 kernel: NET: Registered PF_INET protocol family Jan 23 00:57:19.150591 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 23 00:57:19.150613 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 23 00:57:19.150634 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 00:57:19.150655 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 00:57:19.150675 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 23 00:57:19.150696 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 23 00:57:19.150717 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 23 00:57:19.150738 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 23 00:57:19.150759 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 00:57:19.150784 kernel: NET: Registered PF_XDP protocol family Jan 23 00:57:19.153789 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 00:57:19.154709 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 00:57:19.154949 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 00:57:19.155161 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 23 00:57:19.155409 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 23 00:57:19.155436 kernel: PCI: CLS 0 bytes, default 64 Jan 23 00:57:19.155466 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 00:57:19.155487 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 23 00:57:19.155508 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 00:57:19.155530 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 23 00:57:19.155551 kernel: clocksource: Switched to clocksource tsc
Jan 23 00:57:19.155572 kernel: Initialise system trusted keyrings Jan 23 00:57:19.155592 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 23 00:57:19.155613 kernel: Key type asymmetric registered Jan 23 00:57:19.155634 kernel: Asymmetric key parser 'x509' registered Jan 23 00:57:19.155660 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 00:57:19.155680 kernel: io scheduler mq-deadline registered Jan 23 00:57:19.155701 kernel: io scheduler kyber registered Jan 23 00:57:19.155722 kernel: io scheduler bfq registered Jan 23 00:57:19.155743 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 00:57:19.155765 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 23 00:57:19.158064 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 23 00:57:19.158099 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 23 00:57:19.158364 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 23 00:57:19.158398 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 23 00:57:19.158629 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 23 00:57:19.158655 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 00:57:19.158676 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 00:57:19.158697 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 23 00:57:19.158718 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 23 00:57:19.158739 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 23 00:57:19.160628 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 23 00:57:19.160671 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 00:57:19.160692 kernel: i8042: Warning: Keylock active Jan 23 00:57:19.160713 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 00:57:19.160733 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 00:57:19.161021 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 23 00:57:19.161245 kernel: rtc_cmos 00:00: registered as rtc0 Jan 23 00:57:19.161449 kernel: rtc_cmos 00:00: setting system clock to 2026-01-23T00:57:18 UTC (1769129838) Jan 23 00:57:19.161651 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 23 00:57:19.161681 kernel: intel_pstate: CPU model not supported Jan 23 00:57:19.161701 kernel: pstore: Using crash dump compression: deflate Jan 23 00:57:19.161721 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 00:57:19.161741 kernel: NET: Registered PF_INET6 protocol family Jan 23 00:57:19.161760 kernel: Segment Routing with IPv6 Jan 23 00:57:19.161781 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 00:57:19.161801 kernel: NET: Registered PF_PACKET protocol family Jan 23 00:57:19.161821 kernel: Key type dns_resolver registered Jan 23 00:57:19.161840 kernel: IPI shorthand broadcast: enabled Jan 23 00:57:19.161865 kernel: sched_clock: Marking stable (3467005238, 133422608)->(3617192296, -16764450) Jan 23 00:57:19.161909 kernel: registered taskstats version 1 Jan 23 00:57:19.161928 kernel: Loading compiled-in X.509 certificates Jan 23 00:57:19.161948 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a' Jan 23 00:57:19.161975 kernel: Demotion targets for Node 0: null Jan 23 00:57:19.161996 kernel: Key type .fscrypt registered Jan 23 00:57:19.162017 kernel: Key type fscrypt-provisioning registered
Jan 23 00:57:19.162036 kernel: ima: Allocated hash algorithm: sha1 Jan 23 00:57:19.162054 kernel: ima: No architecture policies found Jan 23 00:57:19.162079 kernel: clk: Disabling unused clocks Jan 23 00:57:19.162109 kernel: Warning: unable to open an initial console. Jan 23 00:57:19.162132 kernel: Freeing unused kernel image (initmem) memory: 46196K Jan 23 00:57:19.162153 kernel: Write protecting the kernel read-only data: 40960k Jan 23 00:57:19.162187 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 00:57:19.162210 kernel: Run /init as init process Jan 23 00:57:19.162232 kernel: with arguments: Jan 23 00:57:19.162255 kernel: /init Jan 23 00:57:19.162275 kernel: with environment: Jan 23 00:57:19.162305 kernel: HOME=/ Jan 23 00:57:19.162327 kernel: TERM=linux Jan 23 00:57:19.162349 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 00:57:19.162373 systemd[1]: Successfully made /usr/ read-only. Jan 23 00:57:19.162401 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 00:57:19.162424 systemd[1]: Detected virtualization google. Jan 23 00:57:19.162445 systemd[1]: Detected architecture x86-64. Jan 23 00:57:19.162474 systemd[1]: Running in initrd. Jan 23 00:57:19.162498 systemd[1]: No hostname configured, using default hostname. Jan 23 00:57:19.162520 systemd[1]: Hostname set to . Jan 23 00:57:19.162539 systemd[1]: Initializing machine ID from random generator. Jan 23 00:57:19.162557 systemd[1]: Queued start job for default target initrd.target. Jan 23 00:57:19.162577 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:57:19.162620 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:57:19.162649 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 00:57:19.162668 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 00:57:19.162688 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 00:57:19.162710 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 00:57:19.162732 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 00:57:19.162753 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 00:57:19.162780 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:57:19.162801 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:57:19.162823 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:57:19.162845 systemd[1]: Reached target slices.target - Slice Units. Jan 23 00:57:19.162866 systemd[1]: Reached target swap.target - Swaps. Jan 23 00:57:19.163934 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:57:19.163962 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
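Two of the clock-related entries above can be sanity-checked by hand. The rtc_cmos line pairs a human-readable time with its Unix epoch; the audit record earlier in the log carries the realtime stamp of early boot (a few seconds before the ~00:57:19 stamps the journal puts on these buffered kernel lines, presumably when it drained the ring buffer); and the sched_clock re-basing keeps raw + offset constant:

```python
from datetime import datetime, timezone

# "rtc_cmos 00:00: setting system clock to 2026-01-23T00:57:18 UTC (1769129838)"
print(datetime.fromtimestamp(1_769_129_838, tz=timezone.utc).isoformat())

# "audit: type=2000 audit(1769129835.314:1)" -- about 3.7 s earlier in realtime
print(datetime.fromtimestamp(1_769_129_835.314, tz=timezone.utc).isoformat())

# "sched_clock: Marking stable (3467005238, 133422608)->(3617192296, -16764450)"
# both (raw_ns, offset_ns) pairs should describe the same instant:
assert 3_467_005_238 + 133_422_608 == 3_617_192_296 + -16_764_450 == 3_600_427_846
```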
Jan 23 00:57:19.163987 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 00:57:19.164018 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 00:57:19.164041 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 00:57:19.164064 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:57:19.164088 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 00:57:19.164111 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:57:19.164134 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:57:19.164157 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 00:57:19.164191 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 00:57:19.164214 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 00:57:19.164243 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 00:57:19.164266 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 00:57:19.164289 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 00:57:19.164312 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 00:57:19.164335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:57:19.164358 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 00:57:19.164391 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:57:19.164415 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 00:57:19.164437 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 00:57:19.164499 systemd-journald[192]: Collecting audit messages is disabled. Jan 23 00:57:19.164552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:57:19.164576 systemd-journald[192]: Journal started Jan 23 00:57:19.164624 systemd-journald[192]: Runtime Journal (/run/log/journal/f258281149794fe29a19f5eebd83ac92) is 8M, max 148.6M, 140.6M free. Jan 23 00:57:19.131334 systemd-modules-load[193]: Inserted module 'overlay' Jan 23 00:57:19.172125 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 00:57:19.174082 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 00:57:19.180119 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 00:57:19.185068 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 00:57:19.185106 kernel: Bridge firewalling registered Jan 23 00:57:19.185731 systemd-modules-load[193]: Inserted module 'br_netfilter' Jan 23 00:57:19.190421 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 00:57:19.201607 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:57:19.214317 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:57:19.216082 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 23 00:57:19.219334 systemd-tmpfiles[205]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 00:57:19.232126 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:57:19.243314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 00:57:19.249071 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 00:57:19.254590 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:57:19.262374 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:57:19.271068 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 00:57:19.289398 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 00:57:19.342719 systemd-resolved[232]: Positive Trust Anchors: Jan 23 00:57:19.343138 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 00:57:19.343225 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 00:57:19.348619 systemd-resolved[232]: Defaulting to hostname 'linux'. Jan 23 00:57:19.350291 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 00:57:19.365141 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:57:19.421927 kernel: SCSI subsystem initialized Jan 23 00:57:19.433921 kernel: Loading iSCSI transport class v2.0-870. Jan 23 00:57:19.446062 kernel: iscsi: registered transport (tcp) Jan 23 00:57:19.470920 kernel: iscsi: registered transport (qla4xxx) Jan 23 00:57:19.470999 kernel: QLogic iSCSI HBA Driver Jan 23 00:57:19.494016 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 00:57:19.517087 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:57:19.525873 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 00:57:19.591231 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 00:57:19.598690 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 00:57:19.662926 kernel: raid6: avx2x4 gen() 18235 MB/s Jan 23 00:57:19.679925 kernel: raid6: avx2x2 gen() 18253 MB/s Jan 23 00:57:19.697300 kernel: raid6: avx2x1 gen() 13921 MB/s Jan 23 00:57:19.697359 kernel: raid6: using algorithm avx2x2 gen() 18253 MB/s
Jan 23 00:57:19.715304 kernel: raid6: .... xor() 18529 MB/s, rmw enabled Jan 23 00:57:19.715376 kernel: raid6: using avx2x2 recovery algorithm Jan 23 00:57:19.737928 kernel: xor: automatically using best checksumming function avx Jan 23 00:57:19.920918 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 00:57:19.930157 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 00:57:19.933612 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:57:19.965525 systemd-udevd[441]: Using default interface naming scheme 'v255'. Jan 23 00:57:19.974855 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:57:19.978850 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 00:57:20.005649 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation Jan 23 00:57:20.041119 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 00:57:20.048649 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 00:57:20.143589 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:57:20.151379 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 00:57:20.239349 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Jan 23 00:57:20.247907 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 00:57:20.258926 kernel: scsi host0: Virtio SCSI HBA Jan 23 00:57:20.268906 kernel: blk-mq: reduced tag depth to 10240 Jan 23 00:57:20.283920 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 23 00:57:20.323933 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 00:57:20.325940 kernel: AES CTR mode by8 optimization enabled Jan 23 00:57:20.400687 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Jan 23 00:57:20.401148 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 23 00:57:20.404865 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 23 00:57:20.405250 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 23 00:57:20.405521 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 23 00:57:20.407694 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:57:20.408240 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:57:20.416140 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:57:20.426007 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 00:57:20.426048 kernel: GPT:17805311 != 33554431 Jan 23 00:57:20.426084 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 00:57:20.426110 kernel: GPT:17805311 != 33554431 Jan 23 00:57:20.426133 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 00:57:20.426166 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:57:20.426190 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 23 00:57:20.422215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:57:20.432685 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:57:20.481000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:57:20.507895 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
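The GPT complaints above are the usual first-boot state for a cloud image: the backup GPT header sits at the last LBA of the original (smaller) disk image, while GCE provisioned a larger PersistentDisk, so the kernel finds it in the middle of the disk. The numbers in the log line up exactly; a quick arithmetic check:

```python
SECTOR = 512
image_last_lba = 17_805_311   # "GPT:17805311 != 33554431" -- where the backup header was written
disk_last_lba  = 33_554_431   # last LBA of the 33554432-sector disk

print((image_last_lba + 1) * SECTOR / 2**30)  # ~8.49 GiB: apparent size of the source image
print((disk_last_lba  + 1) * SECTOR / 2**30)  # 16.0 GiB, matching "(17.2 GB/16.0 GiB)" above
```

Consistent with that reading, once disk-uuid.service rewrites the primary and secondary headers (the disk-uuid[596] lines below), the later partition rescans no longer print the GPT warnings.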
Jan 23 00:57:20.531477 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 00:57:20.556796 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 23 00:57:20.571965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 23 00:57:20.582821 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 23 00:57:20.583103 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 23 00:57:20.588287 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:57:20.593209 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:57:20.598163 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:57:20.603604 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 00:57:20.625230 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 00:57:20.637693 disk-uuid[596]: Primary Header is updated. Jan 23 00:57:20.637693 disk-uuid[596]: Secondary Entries is updated. Jan 23 00:57:20.637693 disk-uuid[596]: Secondary Header is updated. Jan 23 00:57:20.652934 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:57:20.655703 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 00:57:21.684854 disk-uuid[597]: The operation has completed successfully. Jan 23 00:57:21.687026 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:57:21.762159 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 00:57:21.762358 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 00:57:21.815554 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 00:57:21.843289 sh[618]: Success Jan 23 00:57:21.866177 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 00:57:21.866285 kernel: device-mapper: uevent: version 1.0.3 Jan 23 00:57:21.866316 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 00:57:21.879903 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jan 23 00:57:21.955133 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 00:57:21.960999 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 00:57:21.977263 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 00:57:21.996925 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (630) Jan 23 00:57:21.999567 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 Jan 23 00:57:21.999617 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:57:22.034781 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 00:57:22.034872 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 00:57:22.034913 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 00:57:22.041094 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 00:57:22.042453 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
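verity-setup.service above maps /dev/mapper/usr against the root hash passed on the kernel command line (verity.usrhash=e8d7...). A minimal sketch of pulling that hash out of /proc/cmdline, as such a setup script might; the pattern below is inferred from this log, not taken from Flatcar's actual implementation:

```python
import re

cmdline = open("/proc/cmdline").read()   # the same string logged as "Command line:" above
m = re.search(r"\bverity\.usrhash=([0-9a-f]{64})\b", cmdline)
root_hash = m.group(1)

# 64 hex digits = 256 bits, consistent with the kernel's
# 'device-mapper: verity: sha256 using shash "sha256-avx2"' line above
print(root_hash)  # e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
```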
Jan 23 00:57:22.045370 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 00:57:22.047601 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 00:57:22.069254 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 00:57:22.099928 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (653) Jan 23 00:57:22.102043 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:57:22.103918 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:57:22.112464 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 00:57:22.112535 kernel: BTRFS info (device sda6): turning on async discard Jan 23 00:57:22.112561 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 00:57:22.118935 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:57:22.121133 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 00:57:22.128556 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 00:57:22.262632 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 00:57:22.288127 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 00:57:22.375964 ignition[717]: Ignition 2.22.0 Jan 23 00:57:22.375990 ignition[717]: Stage: fetch-offline Jan 23 00:57:22.376047 ignition[717]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:57:22.380905 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 00:57:22.376061 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 23 00:57:22.387694 systemd-networkd[800]: lo: Link UP Jan 23 00:57:22.376208 ignition[717]: parsed url from cmdline: "" Jan 23 00:57:22.387700 systemd-networkd[800]: lo: Gained carrier Jan 23 00:57:22.376217 ignition[717]: no config URL provided Jan 23 00:57:22.389809 systemd-networkd[800]: Enumeration completed Jan 23 00:57:22.376226 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 00:57:22.390556 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 00:57:22.376239 ignition[717]: no config at "/usr/lib/ignition/user.ign" Jan 23 00:57:22.390765 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:57:22.376249 ignition[717]: failed to fetch config: resource requires networking Jan 23 00:57:22.390785 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:57:22.376491 ignition[717]: Ignition finished successfully Jan 23 00:57:22.391948 systemd-networkd[800]: eth0: Link UP Jan 23 00:57:22.392201 systemd-networkd[800]: eth0: Gained carrier Jan 23 00:57:22.392218 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:57:22.393287 systemd[1]: Reached target network.target - Network. Jan 23 00:57:22.443409 ignition[809]: Ignition 2.22.0 Jan 23 00:57:22.400113 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 00:57:22.443421 ignition[809]: Stage: fetch Jan 23 00:57:22.403977 systemd-networkd[800]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512.c.flatcar-212911.internal' to 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512' Jan 23 00:57:22.443615 ignition[809]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:57:22.403997 systemd-networkd[800]: eth0: DHCPv4 address 10.128.0.101/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 23 00:57:22.443633 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 23 00:57:22.454346 unknown[809]: fetched base config from "system" Jan 23 00:57:22.443777 ignition[809]: parsed url from cmdline: "" Jan 23 00:57:22.454367 unknown[809]: fetched base config from "system" Jan 23 00:57:22.443784 ignition[809]: no config URL provided Jan 23 00:57:22.454379 unknown[809]: fetched user config from "gcp" Jan 23 00:57:22.443793 ignition[809]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 00:57:22.458082 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 00:57:22.443807 ignition[809]: no config at "/usr/lib/ignition/user.ign" Jan 23 00:57:22.464817 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 00:57:22.443846 ignition[809]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 23 00:57:22.448896 ignition[809]: GET result: OK Jan 23 00:57:22.449020 ignition[809]: parsing config with SHA512: d35c63f07e4147a9453178628a177cf83df64f5407fa48fd8291821df2b670c9c2bd33c273a0aa181d1351b361ec90c6b98571974a1a4f964195db908307d88a Jan 23 00:57:22.455214 ignition[809]: fetch: fetch complete Jan 23 00:57:22.455225 ignition[809]: fetch: fetch passed Jan 23 00:57:22.455302 ignition[809]: Ignition finished successfully Jan 23 00:57:22.510821 ignition[817]: Ignition 2.22.0 Jan 23 00:57:22.510838 ignition[817]: Stage: kargs Jan 23 00:57:22.511124 ignition[817]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:57:22.514821 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 00:57:22.511142 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 23 00:57:22.518259 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 00:57:22.512441 ignition[817]: kargs: kargs passed Jan 23 00:57:22.512499 ignition[817]: Ignition finished successfully Jan 23 00:57:22.552361 ignition[824]: Ignition 2.22.0 Jan 23 00:57:22.552389 ignition[824]: Stage: disks Jan 23 00:57:22.552602 ignition[824]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:57:22.556054 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 00:57:22.552619 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 23 00:57:22.558702 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 00:57:22.554269 ignition[824]: disks: disks passed Jan 23 00:57:22.564049 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 00:57:22.554344 ignition[824]: Ignition finished successfully Jan 23 00:57:22.568051 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 00:57:22.573041 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:57:22.577021 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:57:22.582517 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
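The fetch stage above logs the SHA512 of the user-data it pulled from the GCE metadata server. From inside the instance one could reproduce that digest; a sketch under the assumption that the user-data is unchanged since boot (the Metadata-Flavor header is GCE's standard requirement for metadata requests):

```python
import hashlib
import urllib.request

# same URL Ignition fetched above: .../instance/attributes/user-data
req = urllib.request.Request(
    "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data",
    headers={"Metadata-Flavor": "Google"},  # required by the GCE metadata server
)
body = urllib.request.urlopen(req).read()

# Ignition logged: parsing config with SHA512: d35c63f0... -- digest of the raw body
print(hashlib.sha512(body).hexdigest())
```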
Jan 23 00:57:22.624593 systemd-fsck[833]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 00:57:22.639241 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 00:57:22.641816 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 00:57:22.806923 kernel: EXT4-fs (sda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 00:57:22.807679 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 00:57:22.810788 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 00:57:22.814966 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 00:57:22.829705 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 00:57:22.835482 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 00:57:22.835574 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 00:57:22.835619 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 00:57:22.848983 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 00:57:22.856362 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (841) Jan 23 00:57:22.856404 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:57:22.856428 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:57:22.856066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 00:57:22.865245 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 00:57:22.865320 kernel: BTRFS info (device sda6): turning on async discard Jan 23 00:57:22.865346 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 00:57:22.870251 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 00:57:22.974457 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 00:57:22.984044 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory Jan 23 00:57:22.990568 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 00:57:22.996913 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 00:57:23.149610 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 00:57:23.156651 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 00:57:23.169401 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 00:57:23.182391 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 00:57:23.188757 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:57:23.223490 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 00:57:23.230867 ignition[953]: INFO : Ignition 2.22.0 Jan 23 00:57:23.230867 ignition[953]: INFO : Stage: mount Jan 23 00:57:23.235049 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:57:23.235049 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 23 00:57:23.235049 ignition[953]: INFO : mount: mount passed Jan 23 00:57:23.235049 ignition[953]: INFO : Ignition finished successfully Jan 23 00:57:23.234969 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 23 00:57:23.237604 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 00:57:23.267320 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 00:57:23.298946 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (965)
Jan 23 00:57:23.301834 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 00:57:23.301917 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 00:57:23.307066 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 00:57:23.307140 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 00:57:23.307165 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 00:57:23.310546 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 00:57:23.353523 ignition[981]: INFO : Ignition 2.22.0
Jan 23 00:57:23.353523 ignition[981]: INFO : Stage: files
Jan 23 00:57:23.360043 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:57:23.360043 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 23 00:57:23.360043 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 00:57:23.360043 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 00:57:23.360043 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 00:57:23.376984 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 00:57:23.376984 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 00:57:23.376984 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 00:57:23.376984 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 00:57:23.376984 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 23 00:57:23.363668 unknown[981]: wrote ssh authorized keys file for user: core
Jan 23 00:57:23.476711 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 00:57:23.599125 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 00:57:23.603059 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 00:57:23.603059 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 00:57:23.603059 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 00:57:23.603059 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 00:57:23.603059 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 00:57:23.603059 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 00:57:23.603059 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
"/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:57:23.603059 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:57:23.635026 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:57:23.635026 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:57:23.635026 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:57:23.635026 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:57:23.635026 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:57:23.635026 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 00:57:24.077368 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 00:57:24.319053 systemd-networkd[800]: eth0: Gained IPv6LL Jan 23 00:57:24.988470 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:57:24.988470 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 00:57:24.996048 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:57:24.996048 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:57:24.996048 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 00:57:24.996048 ignition[981]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 00:57:24.996048 ignition[981]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 00:57:24.996048 ignition[981]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:57:24.996048 ignition[981]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:57:24.996048 ignition[981]: INFO : files: files passed Jan 23 00:57:24.996048 ignition[981]: INFO : Ignition finished successfully Jan 23 00:57:24.996514 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 00:57:25.003639 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 00:57:25.013090 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 00:57:25.044081 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 00:57:25.044266 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 23 00:57:25.060059 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:57:25.060059 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:57:25.068996 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:57:25.060945 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 00:57:25.065534 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 00:57:25.075726 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 00:57:25.130061 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 00:57:25.130233 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 00:57:25.134299 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 00:57:25.140010 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 00:57:25.144097 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 00:57:25.145423 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 00:57:25.186192 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:57:25.188439 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 00:57:25.215344 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:57:25.215775 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:57:25.220680 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 00:57:25.224442 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 00:57:25.225234 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:57:25.232408 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 00:57:25.235415 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 00:57:25.239445 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 00:57:25.243430 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 00:57:25.247407 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 00:57:25.251396 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 00:57:25.255457 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 00:57:25.259396 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 00:57:25.263443 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 00:57:25.268569 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 00:57:25.272622 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 00:57:25.276383 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 00:57:25.276899 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 00:57:25.286276 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:57:25.292286 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:57:25.295360 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 00:57:25.295649 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:57:25.299485 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 00:57:25.300112 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 00:57:25.312059 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 00:57:25.312581 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 00:57:25.315487 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 00:57:25.315693 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 00:57:25.321787 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 00:57:25.326579 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 00:57:25.335125 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 00:57:25.335543 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:57:25.347214 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 00:57:25.347414 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 00:57:25.363652 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 00:57:25.370940 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 00:57:25.371236 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 00:57:25.381493 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 00:57:25.382424 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 00:57:25.387200 ignition[1035]: INFO : Ignition 2.22.0
Jan 23 00:57:25.387200 ignition[1035]: INFO : Stage: umount
Jan 23 00:57:25.387200 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:57:25.387200 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 23 00:57:25.387200 ignition[1035]: INFO : umount: umount passed
Jan 23 00:57:25.387200 ignition[1035]: INFO : Ignition finished successfully
Jan 23 00:57:25.391462 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 00:57:25.391587 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 00:57:25.407624 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 00:57:25.407742 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 00:57:25.411114 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 00:57:25.411192 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 00:57:25.418185 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 00:57:25.418373 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 00:57:25.421193 systemd[1]: Stopped target network.target - Network.
Jan 23 00:57:25.425181 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 00:57:25.425246 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 00:57:25.429214 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 00:57:25.433194 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 00:57:25.438193 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:57:25.441183 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 00:57:25.443651 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 00:57:25.447261 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 00:57:25.447428 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:57:25.451360 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 00:57:25.451437 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 00:57:25.455222 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 00:57:25.455408 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 00:57:25.459221 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 00:57:25.459395 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 00:57:25.463207 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 00:57:25.463388 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 00:57:25.467633 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 00:57:25.471654 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 00:57:25.479229 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 00:57:25.479488 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 00:57:25.486565 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 00:57:25.486872 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 00:57:25.487074 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 00:57:25.494645 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 00:57:25.495908 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 00:57:25.499032 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 00:57:25.499100 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:57:25.504450 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 00:57:25.516972 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 00:57:25.517065 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 00:57:25.521069 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 00:57:25.521147 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:57:25.525373 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 00:57:25.525457 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:57:25.531154 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 00:57:25.531211 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:57:25.534453 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:57:25.542480 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 00:57:25.542604 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:57:25.554706 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 00:57:25.555516 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:57:25.561474 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 00:57:25.561647 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 00:57:25.568066 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 00:57:25.568191 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:57:25.571043 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 00:57:25.571106 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:57:25.574241 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 00:57:25.574429 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 00:57:25.585990 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 00:57:25.586082 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 00:57:25.593062 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 00:57:25.593189 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 00:57:25.602258 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 00:57:25.613984 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 00:57:25.614096 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:57:25.618772 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 00:57:25.618858 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:57:25.629496 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 00:57:25.629731 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:57:25.635225 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 00:57:25.635302 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:57:25.638212 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:57:25.638379 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:57:25.644860 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 00:57:25.644968 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jan 23 00:57:25.645033 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 00:57:25.645106 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:57:25.645745 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 00:57:25.645918 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 00:57:25.716020 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jan 23 00:57:25.649536 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 00:57:25.656306 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 00:57:25.681458 systemd[1]: Switching root.
Jan 23 00:57:25.723011 systemd-journald[192]: Journal stopped
Jan 23 00:57:27.720762 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 00:57:27.720823 kernel: SELinux: policy capability open_perms=1
Jan 23 00:57:27.720852 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 00:57:27.720869 kernel: SELinux: policy capability always_check_network=0
Jan 23 00:57:27.720905 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 00:57:27.720925 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 00:57:27.720946 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 00:57:27.720964 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 00:57:27.720989 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 00:57:27.721010 kernel: audit: type=1403 audit(1769129846.292:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 00:57:27.721035 systemd[1]: Successfully loaded SELinux policy in 69.399ms.
Jan 23 00:57:27.721061 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.825ms.
Jan 23 00:57:27.721085 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:57:27.721107 systemd[1]: Detected virtualization google.
Jan 23 00:57:27.721135 systemd[1]: Detected architecture x86-64.
Jan 23 00:57:27.721156 systemd[1]: Detected first boot.
Jan 23 00:57:27.721180 systemd[1]: Initializing machine ID from random generator.
Jan 23 00:57:27.721203 zram_generator::config[1079]: No configuration found.
Jan 23 00:57:27.721229 kernel: Guest personality initialized and is inactive
Jan 23 00:57:27.721251 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 00:57:27.721277 kernel: Initialized host personality
Jan 23 00:57:27.721298 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 00:57:27.721321 systemd[1]: Populated /etc with preset unit settings.
Jan 23 00:57:27.721345 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 00:57:27.721367 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 00:57:27.721391 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 00:57:27.721412 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 00:57:27.721440 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 00:57:27.721464 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 00:57:27.721488 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 00:57:27.721511 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 00:57:27.721536 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 00:57:27.721559 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 00:57:27.721583 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 00:57:27.721610 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 00:57:27.721635 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
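[Editor's note] The systemd banner in this span encodes compile-time options as +FLAG/-FLAG tokens (enabled/disabled). A small sketch that splits the string copied from the log into the two sets, which is all there is to reading that line:

```python
# Feature string copied verbatim from the "systemd 256.8 running" line above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB "
            "+ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

enabled = [f[1:] for f in features.split() if f.startswith("+")]
disabled = [f[1:] for f in features.split() if f.startswith("-")]

print(f"{len(enabled)} features compiled in, {len(disabled)} compiled out")
# Matches the SELinux policy load logged just above this banner.
print("SELINUX enabled:", "SELINUX" in enabled)
```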
Jan 23 00:57:27.721659 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:57:27.721681 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 00:57:27.721713 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 00:57:27.721737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 00:57:27.721768 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:57:27.721794 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 00:57:27.721818 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:57:27.721846 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:57:27.721870 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 00:57:27.723049 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 00:57:27.723079 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 00:57:27.723102 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 00:57:27.723125 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:57:27.723150 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 00:57:27.725409 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:57:27.725445 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:57:27.725472 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 00:57:27.725497 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 00:57:27.725522 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 00:57:27.725548 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:57:27.725579 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:57:27.725604 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:57:27.725628 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 00:57:27.725652 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 00:57:27.725677 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 00:57:27.725710 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 00:57:27.725735 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:57:27.725765 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 00:57:27.725790 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 00:57:27.725814 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 00:57:27.725840 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 00:57:27.725864 systemd[1]: Reached target machines.target - Containers.
Jan 23 00:57:27.727300 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 00:57:27.727335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:57:27.727361 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 00:57:27.727393 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 00:57:27.727417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:57:27.727441 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:57:27.727466 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:57:27.727489 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 00:57:27.727514 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:57:27.727539 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 00:57:27.727566 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 00:57:27.727591 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 00:57:27.727621 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 00:57:27.727646 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 00:57:27.727671 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:57:27.727716 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 00:57:27.727740 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 00:57:27.727766 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 00:57:27.727790 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 00:57:27.727815 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 00:57:27.727845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 00:57:27.727870 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 00:57:27.727911 systemd[1]: Stopped verity-setup.service.
Jan 23 00:57:27.727936 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:57:27.727961 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 00:57:27.727985 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 00:57:27.728009 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 00:57:27.728034 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 00:57:27.728063 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 00:57:27.728088 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 00:57:27.728113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:57:27.728138 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 00:57:27.728162 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 00:57:27.728187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:57:27.728211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:57:27.728235 kernel: fuse: init (API version 7.41)
Jan 23 00:57:27.728258 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:57:27.728287 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:57:27.728311 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:57:27.728336 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 00:57:27.728360 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 00:57:27.728385 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 00:57:27.728455 systemd-journald[1153]: Collecting audit messages is disabled.
Jan 23 00:57:27.728511 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 00:57:27.728535 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 00:57:27.728559 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 00:57:27.728583 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:57:27.728607 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 00:57:27.728632 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:57:27.728662 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 00:57:27.728709 systemd-journald[1153]: Journal started
Jan 23 00:57:27.728760 systemd-journald[1153]: Runtime Journal (/run/log/journal/ab37abb76f604b388f4a8c40b7080991) is 8M, max 148.6M, 140.6M free.
Jan 23 00:57:27.744941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:57:27.745018 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 00:57:27.149343 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 00:57:27.174808 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 23 00:57:27.175353 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 00:57:27.752060 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 00:57:27.764453 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 00:57:27.763971 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:57:27.769977 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 00:57:27.773353 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 00:57:27.802698 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 00:57:27.813846 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 00:57:27.891914 kernel: ACPI: bus type drm_connector registered
Jan 23 00:57:27.893733 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 00:57:27.897939 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:57:27.898254 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:57:27.906210 systemd-journald[1153]: Time spent on flushing to /var/log/journal/ab37abb76f604b388f4a8c40b7080991 is 159.032ms for 955 entries.
Jan 23 00:57:27.906210 systemd-journald[1153]: System Journal (/var/log/journal/ab37abb76f604b388f4a8c40b7080991) is 8M, max 584.8M, 576.8M free.
Jan 23 00:57:28.100833 systemd-journald[1153]: Received client request to flush runtime journal.
Jan 23 00:57:28.100956 kernel: loop: module loaded
Jan 23 00:57:28.101000 kernel: loop0: detected capacity change from 0 to 128560
Jan 23 00:57:28.101032 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 00:57:28.101063 kernel: loop1: detected capacity change from 0 to 50736
Jan 23 00:57:27.902546 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 00:57:27.904072 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 00:57:27.907939 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:57:27.916561 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 00:57:27.920592 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 00:57:27.929566 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 00:57:27.934486 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:57:27.934770 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:57:27.938552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:57:28.025466 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Jan 23 00:57:28.025520 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Jan 23 00:57:28.047954 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:57:28.057123 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 00:57:28.060907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:57:28.068180 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 00:57:28.109452 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 00:57:28.117462 kernel: loop2: detected capacity change from 0 to 110984
Jan 23 00:57:28.162555 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 00:57:28.169786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 00:57:28.179531 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 00:57:28.184857 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 00:57:28.188959 kernel: loop3: detected capacity change from 0 to 219144
Jan 23 00:57:28.224820 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 00:57:28.283218 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Jan 23 00:57:28.285947 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Jan 23 00:57:28.299184 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
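[Editor's note] The journald lines above show the volatile runtime journal under /run (8M) being flushed into the persistent System Journal under /var/log/journal once the root filesystem is writable. A hedged sketch of checking the resulting on-disk footprint with journalctl's real --disk-usage flag; the exact output string will vary by system:

```python
import subprocess

# journalctl --disk-usage reports the combined size of archived and active
# journal files, the same accounting journald logs above after the flush.
usage = subprocess.run(
    ["journalctl", "--disk-usage"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(usage)  # e.g. "Archived and active journals take up 8.0M ..."
```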
Jan 23 00:57:28.327919 kernel: loop4: detected capacity change from 0 to 128560
Jan 23 00:57:28.370312 kernel: loop5: detected capacity change from 0 to 50736
Jan 23 00:57:28.402295 kernel: loop6: detected capacity change from 0 to 110984
Jan 23 00:57:28.439367 kernel: loop7: detected capacity change from 0 to 219144
Jan 23 00:57:28.482336 (sd-merge)[1233]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Jan 23 00:57:28.485310 (sd-merge)[1233]: Merged extensions into '/usr'.
Jan 23 00:57:28.501024 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 00:57:28.501243 systemd[1]: Reloading...
Jan 23 00:57:28.729916 zram_generator::config[1257]: No configuration found.
Jan 23 00:57:28.959976 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 00:57:29.205324 systemd[1]: Reloading finished in 702 ms.
Jan 23 00:57:29.223739 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 00:57:29.227430 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 00:57:29.229408 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 00:57:29.244255 systemd[1]: Starting ensure-sysext.service...
Jan 23 00:57:29.251050 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 00:57:29.259466 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:57:29.284099 systemd[1]: Reload requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)...
Jan 23 00:57:29.284294 systemd[1]: Reloading...
Jan 23 00:57:29.306370 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 00:57:29.309184 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 00:57:29.309863 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 00:57:29.310790 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 00:57:29.316500 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 00:57:29.321322 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Jan 23 00:57:29.321463 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Jan 23 00:57:29.322507 systemd-udevd[1302]: Using default interface naming scheme 'v255'.
Jan 23 00:57:29.333826 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:57:29.334009 systemd-tmpfiles[1301]: Skipping /boot
Jan 23 00:57:29.360852 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:57:29.363585 systemd-tmpfiles[1301]: Skipping /boot
Jan 23 00:57:29.379918 zram_generator::config[1326]: No configuration found.
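[Editor's note] sd-merge above overlays the four extension images onto /usr (the loop0-loop7 capacity changes are their backing loop devices). A hedged sketch of inspecting the merge afterwards with the real systemd-sysext CLI via subprocess:

```python
import subprocess

# "systemd-sysext status" lists each hierarchy together with the extension
# images merged into it, matching the sd-merge line above.
print(subprocess.run(["systemd-sysext", "status"],
                     capture_output=True, text=True, check=True).stdout)

# After dropping a new image into /etc/extensions (as Ignition did for
# kubernetes.raw earlier in this log), "systemd-sysext refresh" re-merges.
```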
Jan 23 00:57:29.961928 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 23 00:57:29.966907 kernel: ACPI: button: Power Button [PWRF]
Jan 23 00:57:29.966999 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jan 23 00:57:29.968908 kernel: ACPI: button: Sleep Button [SLPF]
Jan 23 00:57:30.007990 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jan 23 00:57:30.082133 kernel: EDAC MC: Ver: 3.0.0
Jan 23 00:57:30.123867 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 00:57:30.124348 systemd[1]: Reloading finished in 838 ms.
Jan 23 00:57:30.157767 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:57:30.174952 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 00:57:30.184614 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:57:30.228897 systemd[1]: Finished ensure-sysext.service.
Jan 23 00:57:30.259724 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 23 00:57:30.267353 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Jan 23 00:57:30.270064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:57:30.271664 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:57:30.277772 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 00:57:30.279709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:57:30.285077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:57:30.289099 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:57:30.295070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:57:30.300372 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:57:30.306599 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 23 00:57:30.309205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:57:30.312162 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 00:57:30.313830 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:57:30.323844 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 00:57:30.332545 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 00:57:30.341682 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:57:30.345032 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 00:57:30.351295 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 00:57:30.355268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:57:30.358021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:57:30.361357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:57:30.361688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:57:30.363893 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:57:30.364188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:57:30.367522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:57:30.367837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:57:30.371420 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:57:30.371721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:57:30.390427 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:57:30.390597 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:57:30.396206 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 00:57:30.416748 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 00:57:30.430971 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 23 00:57:30.443570 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Jan 23 00:57:30.446481 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 00:57:30.488005 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 00:57:30.493569 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 00:57:30.517969 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 00:57:30.519677 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 00:57:30.555872 augenrules[1473]: No rules
Jan 23 00:57:30.556870 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 00:57:30.560727 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:57:30.561581 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:57:30.562781 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Jan 23 00:57:30.583298 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 00:57:30.609619 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:57:30.698662 systemd-networkd[1427]: lo: Link UP
Jan 23 00:57:30.699191 systemd-networkd[1427]: lo: Gained carrier
Jan 23 00:57:30.702636 systemd-networkd[1427]: Enumeration completed
Jan 23 00:57:30.703026 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 00:57:30.703565 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:57:30.703581 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:57:30.704254 systemd-networkd[1427]: eth0: Link UP
Jan 23 00:57:30.704528 systemd-networkd[1427]: eth0: Gained carrier
Jan 23 00:57:30.704565 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:57:30.708146 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 00:57:30.713047 systemd-resolved[1431]: Positive Trust Anchors:
Jan 23 00:57:30.713466 systemd-resolved[1431]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:57:30.713622 systemd-resolved[1431]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:57:30.715148 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 00:57:30.716114 systemd-networkd[1427]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512.c.flatcar-212911.internal' to 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512'
Jan 23 00:57:30.716139 systemd-networkd[1427]: eth0: DHCPv4 address 10.128.0.101/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 23 00:57:30.725492 systemd-resolved[1431]: Defaulting to hostname 'linux'.
Jan 23 00:57:30.730614 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:57:30.734086 systemd[1]: Reached target network.target - Network.
Jan 23 00:57:30.737330 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:57:30.740009 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 00:57:30.743122 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 00:57:30.746096 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 00:57:30.749030 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 00:57:30.751283 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 00:57:30.754194 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 00:57:30.757010 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 00:57:30.758703 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 00:57:30.758739 systemd[1]: Reached target paths.target - Path Units.
Jan 23 00:57:30.761009 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 00:57:30.765466 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 00:57:30.770606 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 00:57:30.775840 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
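[Editor's note] The "Overlong DHCP hostname" line shows networkd refusing to apply the 80-character FQDN the GCE DHCP server hands out: a Linux hostname is capped at 64 bytes, so networkd falls back to the first DNS label. A hedged re-creation of that check; the 64-byte limit is standard Linux (HOST_NAME_MAX), while the exact fallback rule here mirrors what the log shows rather than networkd's source:

```python
HOST_NAME_MAX = 64  # Linux limit on a node name

def shorten_dhcp_hostname(fqdn: str) -> str:
    # Mirror the behavior visible in the log: if the full name does not
    # fit, keep only the first DNS label.
    if len(fqdn) <= HOST_NAME_MAX:
        return fqdn
    return fqdn.split(".", 1)[0]

fqdn = ("ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
        ".c.flatcar-212911.internal")
print(len(fqdn))                    # 80 -> over HOST_NAME_MAX
print(shorten_dhcp_hostname(fqdn))  # the shortened name from the log
```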
Jan 23 00:57:30.779152 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 00:57:30.781975 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 00:57:30.803848 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 00:57:30.807508 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 00:57:30.812351 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 00:57:30.816287 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 00:57:30.820697 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 00:57:30.822391 systemd[1]: Reached target basic.target - Basic System.
Jan 23 00:57:30.825098 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 00:57:30.825145 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 00:57:30.826803 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 00:57:30.835081 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 00:57:30.843157 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 00:57:30.863116 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 00:57:30.876630 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 00:57:30.883196 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 00:57:30.886021 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 00:57:30.890114 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 00:57:30.897643 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 00:57:30.900172 systemd[1]: Started ntpd.service - Network Time Service.
Jan 23 00:57:30.907414 jq[1502]: false
Jan 23 00:57:30.913932 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 00:57:30.924192 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 00:57:30.930546 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Refreshing passwd entry cache
Jan 23 00:57:30.929861 oslogin_cache_refresh[1506]: Refreshing passwd entry cache
Jan 23 00:57:30.932946 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 00:57:30.942489 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Failure getting users, quitting
Jan 23 00:57:30.942489 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 00:57:30.942489 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Refreshing group entry cache
Jan 23 00:57:30.942489 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Failure getting groups, quitting
Jan 23 00:57:30.942489 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 00:57:30.936966 oslogin_cache_refresh[1506]: Failure getting users, quitting
Jan 23 00:57:30.936994 oslogin_cache_refresh[1506]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 00:57:30.937060 oslogin_cache_refresh[1506]: Refreshing group entry cache
Jan 23 00:57:30.938544 oslogin_cache_refresh[1506]: Failure getting groups, quitting
Jan 23 00:57:30.938566 oslogin_cache_refresh[1506]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 00:57:30.943209 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 00:57:30.948515 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Jan 23 00:57:30.951244 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 00:57:30.956562 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 00:57:30.967093 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 00:57:30.980673 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 00:57:30.981016 coreos-metadata[1499]: Jan 23 00:57:30.980 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Jan 23 00:57:30.988066 coreos-metadata[1499]: Jan 23 00:57:30.982 INFO Fetch successful
Jan 23 00:57:30.988066 coreos-metadata[1499]: Jan 23 00:57:30.982 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Jan 23 00:57:30.988066 coreos-metadata[1499]: Jan 23 00:57:30.982 INFO Fetch successful
Jan 23 00:57:30.988066 coreos-metadata[1499]: Jan 23 00:57:30.983 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Jan 23 00:57:30.988066 coreos-metadata[1499]: Jan 23 00:57:30.983 INFO Fetch successful
Jan 23 00:57:30.988066 coreos-metadata[1499]: Jan 23 00:57:30.984 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Jan 23 00:57:30.988066 coreos-metadata[1499]: Jan 23 00:57:30.984 INFO Fetch successful
Jan 23 00:57:30.984741 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 00:57:30.986152 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 00:57:30.986677 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 00:57:30.987966 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 00:57:30.999368 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 00:57:30.999608 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 00:57:31.008958 jq[1514]: true
Jan 23 00:57:31.028378 extend-filesystems[1505]: Found /dev/sda6
Jan 23 00:57:31.076931 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 00:57:31.077437 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 00:57:31.086907 extend-filesystems[1505]: Found /dev/sda9
Jan 23 00:57:31.090788 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 00:57:31.094386 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 00:57:31.102940 extend-filesystems[1505]: Checking size of /dev/sda9 Jan 23 00:57:31.119853 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 00:57:31.122926 ntpd[1508]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 00:57:31.138800 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 00:57:31.138800 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:57:31.138800 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: ---------------------------------------------------- Jan 23 00:57:31.138800 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:57:31.138800 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:57:31.138800 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: corporation. Support and training for ntp-4 are Jan 23 00:57:31.138800 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: available at https://www.nwtime.org/support Jan 23 00:57:31.138800 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: ---------------------------------------------------- Jan 23 00:57:31.123027 ntpd[1508]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:57:31.139870 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: proto: precision = 0.080 usec (-23) Jan 23 00:57:31.123046 ntpd[1508]: ---------------------------------------------------- Jan 23 00:57:31.123064 ntpd[1508]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:57:31.123081 ntpd[1508]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:57:31.123097 ntpd[1508]: corporation. Support and training for ntp-4 are Jan 23 00:57:31.123114 ntpd[1508]: available at https://www.nwtime.org/support Jan 23 00:57:31.123131 ntpd[1508]: ---------------------------------------------------- Jan 23 00:57:31.138927 ntpd[1508]: proto: precision = 0.080 usec (-23) Jan 23 00:57:31.141602 ntpd[1508]: basedate set to 2026-01-10 Jan 23 00:57:31.144117 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: basedate set to 2026-01-10 Jan 23 00:57:31.144117 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: gps base set to 2026-01-11 (week 2401) Jan 23 00:57:31.144117 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:57:31.144117 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:57:31.141636 ntpd[1508]: gps base set to 2026-01-11 (week 2401) Jan 23 00:57:31.141826 ntpd[1508]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:57:31.141873 ntpd[1508]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:57:31.142182 ntpd[1508]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:57:31.145469 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:57:31.145469 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: Listen normally on 3 eth0 10.128.0.101:123 Jan 23 00:57:31.145469 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: Listen normally on 4 lo [::1]:123 Jan 23 00:57:31.145469 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: bind(21) AF_INET6 [fe80::4001:aff:fe80:65%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 00:57:31.145469 ntpd[1508]: 23 Jan 00:57:31 ntpd[1508]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:65%2]:123 Jan 23 00:57:31.144994 ntpd[1508]: Listen normally on 3 eth0 10.128.0.101:123 Jan 23 00:57:31.145053 ntpd[1508]: Listen normally on 4 lo [::1]:123 Jan 23 00:57:31.147714 
kernel: ntpd[1508]: segfault at 24 ip 0000559ee0dbbaeb sp 00007ffdd0b09540 error 4 in ntpd[68aeb,559ee0d59000+80000] likely on CPU 0 (core 0, socket 0) Jan 23 00:57:31.147770 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Jan 23 00:57:31.145103 ntpd[1508]: bind(21) AF_INET6 [fe80::4001:aff:fe80:65%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 00:57:31.145140 ntpd[1508]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:65%2]:123 Jan 23 00:57:31.150339 update_engine[1513]: I20260123 00:57:31.149299 1513 main.cc:92] Flatcar Update Engine starting Jan 23 00:57:31.161644 jq[1530]: true Jan 23 00:57:31.191517 systemd-coredump[1556]: Process 1508 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 00:57:31.193353 tar[1520]: linux-amd64/LICENSE Jan 23 00:57:31.193353 tar[1520]: linux-amd64/helm Jan 23 00:57:31.200537 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Jan 23 00:57:31.207838 extend-filesystems[1505]: Resized partition /dev/sda9 Jan 23 00:57:31.207222 systemd[1]: Started systemd-coredump@0-1556-0.service - Process Core Dump (PID 1556/UID 0). Jan 23 00:57:31.220961 extend-filesystems[1559]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 00:57:31.242908 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Jan 23 00:57:31.372739 systemd-logind[1512]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 00:57:31.372790 systemd-logind[1512]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 23 00:57:31.372826 systemd-logind[1512]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 00:57:31.373657 systemd-logind[1512]: New seat seat0. Jan 23 00:57:31.376872 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 00:57:31.407647 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Jan 23 00:57:31.414960 extend-filesystems[1559]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 00:57:31.414960 extend-filesystems[1559]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 00:57:31.414960 extend-filesystems[1559]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Jan 23 00:57:31.424612 extend-filesystems[1505]: Resized filesystem in /dev/sda9 Jan 23 00:57:31.419619 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 00:57:31.431736 bash[1574]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:57:31.420765 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 00:57:31.429057 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 00:57:31.473373 dbus-daemon[1500]: [system] SELinux support is enabled Jan 23 00:57:31.475130 systemd[1]: Starting sshkeys.service... Jan 23 00:57:31.478136 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 00:57:31.495326 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 00:57:31.495371 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
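
Note: the online resize above grows /dev/sda9 from 1617920 to 3587067 blocks at the ext4 4 KiB block size, i.e. from about 6.2 GiB to about 13.7 GiB:

    # Figures from the resize2fs/EXT4 messages above (4 KiB blocks).
    old_blocks, new_blocks, block_size = 1_617_920, 3_587_067, 4096
    for name, blocks in (("old", old_blocks), ("new", new_blocks)):
        print(f"{name}: {blocks * block_size / 2**30:.2f} GiB")
    # old: 6.17 GiB
    # new: 13.68 GiB
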
Jan 23 00:57:31.499564 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 00:57:31.499602 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 00:57:31.505346 update_engine[1513]: I20260123 00:57:31.505103 1513 update_check_scheduler.cc:74] Next update check in 6m21s Jan 23 00:57:31.513957 dbus-daemon[1500]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 00:57:31.514119 systemd[1]: Started update-engine.service - Update Engine. Jan 23 00:57:31.519350 dbus-daemon[1500]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1427 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 00:57:31.521198 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 00:57:31.541722 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 00:57:31.554732 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 00:57:31.561943 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 00:57:31.796074 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 00:57:31.797264 coreos-metadata[1581]: Jan 23 00:57:31.796 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 23 00:57:31.798695 dbus-daemon[1500]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 00:57:31.802517 coreos-metadata[1581]: Jan 23 00:57:31.801 INFO Fetch failed with 404: resource not found Jan 23 00:57:31.802517 coreos-metadata[1581]: Jan 23 00:57:31.801 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 23 00:57:31.803556 coreos-metadata[1581]: Jan 23 00:57:31.802 INFO Fetch successful Jan 23 00:57:31.803556 coreos-metadata[1581]: Jan 23 00:57:31.802 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 23 00:57:31.809669 dbus-daemon[1500]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1580 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 00:57:31.822653 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 00:57:31.824786 coreos-metadata[1581]: Jan 23 00:57:31.824 INFO Fetch failed with 404: resource not found Jan 23 00:57:31.824786 coreos-metadata[1581]: Jan 23 00:57:31.824 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 23 00:57:31.824786 coreos-metadata[1581]: Jan 23 00:57:31.824 INFO Fetch failed with 404: resource not found Jan 23 00:57:31.824786 coreos-metadata[1581]: Jan 23 00:57:31.824 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 23 00:57:31.824786 coreos-metadata[1581]: Jan 23 00:57:31.824 INFO Fetch successful Jan 23 00:57:31.834387 unknown[1581]: wrote ssh authorized keys file for user: core Jan 23 00:57:31.896780 systemd-coredump[1558]: Process 1508 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. 
Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1508: #0 0x0000559ee0dbbaeb n/a (ntpd + 0x68aeb) #1 0x0000559ee0d64cdf n/a (ntpd + 0x11cdf) #2 0x0000559ee0d65575 n/a (ntpd + 0x12575) #3 0x0000559ee0d60d8a n/a (ntpd + 0xdd8a) #4 0x0000559ee0d625d3 n/a (ntpd + 0xf5d3) #5 0x0000559ee0d6afd1 n/a (ntpd + 0x17fd1) #6 0x0000559ee0d5bc2d n/a (ntpd + 0x8c2d) #7 0x00007fa23712e16c n/a (libc.so.6 + 0x2716c) #8 0x00007fa23712e229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000559ee0d5bc55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 23 00:57:31.905033 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 00:57:31.905279 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 00:57:31.922576 systemd[1]: systemd-coredump@0-1556-0.service: Deactivated successfully. Jan 23 00:57:31.960627 update-ssh-keys[1591]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:57:31.963615 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 00:57:31.977375 systemd[1]: Finished sshkeys.service. Jan 23 00:57:32.130123 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 00:57:32.135211 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 00:57:32.163411 polkitd[1589]: Started polkitd version 126 Jan 23 00:57:32.180551 polkitd[1589]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 00:57:32.186173 polkitd[1589]: Loading rules from directory /run/polkit-1/rules.d Jan 23 00:57:32.186408 polkitd[1589]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 00:57:32.186774 containerd[1536]: time="2026-01-23T00:57:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 00:57:32.187245 ntpd[1607]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 00:57:32.187345 ntpd[1607]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:57:32.187685 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 00:57:32.187685 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:57:32.187685 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: ---------------------------------------------------- Jan 23 00:57:32.187685 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:57:32.187685 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:57:32.187685 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: corporation. 
Support and training for ntp-4 are Jan 23 00:57:32.187685 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: available at https://www.nwtime.org/support Jan 23 00:57:32.187685 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: ---------------------------------------------------- Jan 23 00:57:32.191981 kernel: ntpd[1607]: segfault at 24 ip 0000557734148aeb sp 00007ffe88e782b0 error 4 in ntpd[68aeb,5577340e6000+80000] likely on CPU 1 (core 0, socket 0) Jan 23 00:57:32.192038 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Jan 23 00:57:32.187361 ntpd[1607]: ---------------------------------------------------- Jan 23 00:57:32.192330 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: proto: precision = 0.097 usec (-23) Jan 23 00:57:32.192330 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: basedate set to 2026-01-10 Jan 23 00:57:32.192330 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: gps base set to 2026-01-11 (week 2401) Jan 23 00:57:32.192330 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:57:32.192330 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:57:32.192330 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:57:32.192330 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: Listen normally on 3 eth0 10.128.0.101:123 Jan 23 00:57:32.192330 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: Listen normally on 4 lo [::1]:123 Jan 23 00:57:32.192330 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: bind(21) AF_INET6 [fe80::4001:aff:fe80:65%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 00:57:32.192330 ntpd[1607]: 23 Jan 00:57:32 ntpd[1607]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:65%2]:123 Jan 23 00:57:32.187377 ntpd[1607]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:57:32.187403 ntpd[1607]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:57:32.187417 ntpd[1607]: corporation. Support and training for ntp-4 are Jan 23 00:57:32.187431 ntpd[1607]: available at https://www.nwtime.org/support Jan 23 00:57:32.196942 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 00:57:32.187445 ntpd[1607]: ---------------------------------------------------- Jan 23 00:57:32.188449 ntpd[1607]: proto: precision = 0.097 usec (-23) Jan 23 00:57:32.188767 ntpd[1607]: basedate set to 2026-01-10 Jan 23 00:57:32.199702 containerd[1536]: time="2026-01-23T00:57:32.197539653Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 00:57:32.188784 ntpd[1607]: gps base set to 2026-01-11 (week 2401) Jan 23 00:57:32.188862 polkitd[1589]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 00:57:32.188922 ntpd[1607]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:57:32.201666 systemd-coredump[1615]: Process 1607 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... 
Jan 23 00:57:32.188965 ntpd[1607]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:57:32.189195 ntpd[1607]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:57:32.189236 ntpd[1607]: Listen normally on 3 eth0 10.128.0.101:123 Jan 23 00:57:32.189278 ntpd[1607]: Listen normally on 4 lo [::1]:123 Jan 23 00:57:32.189319 ntpd[1607]: bind(21) AF_INET6 [fe80::4001:aff:fe80:65%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 00:57:32.189347 ntpd[1607]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:65%2]:123 Jan 23 00:57:32.191367 polkitd[1589]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 00:57:32.191435 polkitd[1589]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 00:57:32.196564 polkitd[1589]: Finished loading, compiling and executing 2 rules Jan 23 00:57:32.212666 dbus-daemon[1500]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 00:57:32.213689 polkitd[1589]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 00:57:32.214200 systemd[1]: Started systemd-coredump@1-1615-0.service - Process Core Dump (PID 1615/UID 0). Jan 23 00:57:32.273099 systemd-resolved[1431]: System hostname changed to 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512'. Jan 23 00:57:32.273289 systemd-hostnamed[1580]: Hostname set to (transient) Jan 23 00:57:32.279243 locksmithd[1579]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.282279261Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.559µs" Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.282331293Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.282362897Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.282578967Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.282604008Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.282652133Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.282737661Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.282755879Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.285160394Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.285192276Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.285215141Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:57:32.286905 containerd[1536]: time="2026-01-23T00:57:32.285236521Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 00:57:32.287575 containerd[1536]: time="2026-01-23T00:57:32.285381524Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 00:57:32.287575 containerd[1536]: time="2026-01-23T00:57:32.285677250Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:57:32.287575 containerd[1536]: time="2026-01-23T00:57:32.285729385Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:57:32.287575 containerd[1536]: time="2026-01-23T00:57:32.285748523Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 00:57:32.287575 containerd[1536]: time="2026-01-23T00:57:32.285786687Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 00:57:32.292823 containerd[1536]: time="2026-01-23T00:57:32.290325675Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 00:57:32.292823 containerd[1536]: time="2026-01-23T00:57:32.290447338Z" level=info msg="metadata content store policy set" policy=shared Jan 23 00:57:32.300144 containerd[1536]: time="2026-01-23T00:57:32.300088527Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 00:57:32.300249 containerd[1536]: time="2026-01-23T00:57:32.300167957Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 00:57:32.300249 containerd[1536]: time="2026-01-23T00:57:32.300193243Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 00:57:32.300249 containerd[1536]: time="2026-01-23T00:57:32.300212160Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 00:57:32.300249 containerd[1536]: time="2026-01-23T00:57:32.300232156Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 00:57:32.300434 containerd[1536]: time="2026-01-23T00:57:32.300248416Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 00:57:32.300434 containerd[1536]: time="2026-01-23T00:57:32.300271261Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 00:57:32.300434 containerd[1536]: time="2026-01-23T00:57:32.300291624Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 00:57:32.300434 containerd[1536]: time="2026-01-23T00:57:32.300309256Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 00:57:32.300434 containerd[1536]: time="2026-01-23T00:57:32.300326888Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 00:57:32.300434 containerd[1536]: time="2026-01-23T00:57:32.300342807Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 00:57:32.300434 containerd[1536]: time="2026-01-23T00:57:32.300364386Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 00:57:32.300719 containerd[1536]: time="2026-01-23T00:57:32.300532499Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 00:57:32.300719 containerd[1536]: time="2026-01-23T00:57:32.300562897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 00:57:32.300719 containerd[1536]: time="2026-01-23T00:57:32.300587568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 00:57:32.300719 containerd[1536]: time="2026-01-23T00:57:32.300611183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 00:57:32.300719 containerd[1536]: time="2026-01-23T00:57:32.300629937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 00:57:32.300719 containerd[1536]: time="2026-01-23T00:57:32.300647392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 00:57:32.300719 containerd[1536]: time="2026-01-23T00:57:32.300685193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 00:57:32.300719 containerd[1536]: time="2026-01-23T00:57:32.300705061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 00:57:32.301301 containerd[1536]: time="2026-01-23T00:57:32.300723756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 00:57:32.301301 containerd[1536]: time="2026-01-23T00:57:32.300741505Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 00:57:32.301301 containerd[1536]: time="2026-01-23T00:57:32.300771789Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 00:57:32.301301 containerd[1536]: time="2026-01-23T00:57:32.300839365Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 00:57:32.301301 containerd[1536]: time="2026-01-23T00:57:32.300861426Z" level=info msg="Start snapshots syncer" Jan 23 00:57:32.301301 containerd[1536]: time="2026-01-23T00:57:32.300920821Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 00:57:32.301560 containerd[1536]: time="2026-01-23T00:57:32.301353254Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 00:57:32.301560 containerd[1536]: time="2026-01-23T00:57:32.301446792Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.310764689Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311039404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311087633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311111790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311140964Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311162424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311180645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311198834Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311245912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 00:57:32.312314 containerd[1536]: 
time="2026-01-23T00:57:32.311273819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311295900Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311345302Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311368434Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:57:32.312314 containerd[1536]: time="2026-01-23T00:57:32.311383325Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:57:32.313139 containerd[1536]: time="2026-01-23T00:57:32.311398803Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:57:32.313139 containerd[1536]: time="2026-01-23T00:57:32.311413443Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 00:57:32.313139 containerd[1536]: time="2026-01-23T00:57:32.311430577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 00:57:32.313139 containerd[1536]: time="2026-01-23T00:57:32.311460001Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 00:57:32.313139 containerd[1536]: time="2026-01-23T00:57:32.311487220Z" level=info msg="runtime interface created" Jan 23 00:57:32.313139 containerd[1536]: time="2026-01-23T00:57:32.311497282Z" level=info msg="created NRI interface" Jan 23 00:57:32.313139 containerd[1536]: time="2026-01-23T00:57:32.311514122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 00:57:32.313139 containerd[1536]: time="2026-01-23T00:57:32.311540381Z" level=info msg="Connect containerd service" Jan 23 00:57:32.313139 containerd[1536]: time="2026-01-23T00:57:32.311573712Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 00:57:32.319787 containerd[1536]: time="2026-01-23T00:57:32.318351746Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:57:32.439370 sshd_keygen[1549]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 00:57:32.541176 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 00:57:32.549281 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 00:57:32.564112 systemd-coredump[1618]: Process 1607 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1607: #0 0x0000557734148aeb n/a (ntpd + 0x68aeb) #1 0x00005577340f1cdf n/a (ntpd + 0x11cdf) #2 0x00005577340f2575 n/a (ntpd + 0x12575) #3 0x00005577340edd8a n/a (ntpd + 0xdd8a) #4 0x00005577340ef5d3 n/a (ntpd + 0xf5d3) #5 0x00005577340f7fd1 n/a (ntpd + 0x17fd1) #6 0x00005577340e8c2d n/a (ntpd + 0x8c2d) #7 0x00007ff9f293b16c n/a (libc.so.6 + 0x2716c) #8 0x00007ff9f293b229 __libc_start_main (libc.so.6 + 0x27229) #9 0x00005577340e8c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 23 00:57:32.567634 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 00:57:32.567980 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 00:57:32.575045 systemd-networkd[1427]: eth0: Gained IPv6LL Jan 23 00:57:32.581549 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 00:57:32.585032 containerd[1536]: time="2026-01-23T00:57:32.584117915Z" level=info msg="Start subscribing containerd event" Jan 23 00:57:32.585032 containerd[1536]: time="2026-01-23T00:57:32.584185180Z" level=info msg="Start recovering state" Jan 23 00:57:32.585032 containerd[1536]: time="2026-01-23T00:57:32.584310821Z" level=info msg="Start event monitor" Jan 23 00:57:32.585032 containerd[1536]: time="2026-01-23T00:57:32.584330188Z" level=info msg="Start cni network conf syncer for default" Jan 23 00:57:32.585032 containerd[1536]: time="2026-01-23T00:57:32.584341579Z" level=info msg="Start streaming server" Jan 23 00:57:32.585032 containerd[1536]: time="2026-01-23T00:57:32.584355841Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 00:57:32.585032 containerd[1536]: time="2026-01-23T00:57:32.584367053Z" level=info msg="runtime interface starting up..." Jan 23 00:57:32.585032 containerd[1536]: time="2026-01-23T00:57:32.584376653Z" level=info msg="starting plugins..." Jan 23 00:57:32.585032 containerd[1536]: time="2026-01-23T00:57:32.584395099Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 00:57:32.585032 containerd[1536]: time="2026-01-23T00:57:32.584663158Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 00:57:32.585032 containerd[1536]: time="2026-01-23T00:57:32.584731032Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 00:57:32.586620 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 00:57:32.589316 containerd[1536]: time="2026-01-23T00:57:32.586753469Z" level=info msg="containerd successfully booted in 0.403247s" Jan 23 00:57:32.590815 systemd[1]: systemd-coredump@1-1615-0.service: Deactivated successfully. Jan 23 00:57:32.602625 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 00:57:32.603364 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 00:57:32.613809 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 00:57:32.624196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:57:32.631428 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 00:57:32.638200 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 23 00:57:32.646307 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 00:57:32.673094 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Jan 23 00:57:32.679014 systemd[1]: Started ntpd.service - Network Time Service. 
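
Note: both ntpd core dumps report their frames only as module-relative offsets (e.g. ntpd + 0x68aeb) because the binaries carry no build-id. Assuming a local copy of the same ntpd binary with debug info is available, such offsets can be resolved offline with addr2line; a sketch (offsets taken from the traces above):

    import subprocess

    def symbolize(binary: str, offsets: list[str]) -> str:
        # addr2line maps each module-relative offset to a function and
        # file:line; -f prints function names, -C demangles them.
        return subprocess.run(["addr2line", "-e", binary, "-f", "-C", *offsets],
                              capture_output=True, text=True, check=True).stdout

    print(symbolize("/usr/sbin/ntpd", ["0x68aeb", "0x11cdf", "0x12575"]))
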
Jan 23 00:57:32.695431 init.sh[1657]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 23 00:57:32.698629 init.sh[1657]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 23 00:57:32.700824 init.sh[1657]: + /usr/bin/google_instance_setup Jan 23 00:57:32.718713 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 00:57:32.726787 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 00:57:32.731604 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 00:57:32.735301 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 00:57:32.752303 ntpd[1660]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 00:57:32.752402 ntpd[1660]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:57:32.754104 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 00:57:32.754104 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:57:32.754104 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: ---------------------------------------------------- Jan 23 00:57:32.754104 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:57:32.754104 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:57:32.754104 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: corporation. Support and training for ntp-4 are Jan 23 00:57:32.754104 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: available at https://www.nwtime.org/support Jan 23 00:57:32.754104 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: ---------------------------------------------------- Jan 23 00:57:32.754104 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: proto: precision = 0.076 usec (-24) Jan 23 00:57:32.754104 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: basedate set to 2026-01-10 Jan 23 00:57:32.754104 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: gps base set to 2026-01-11 (week 2401) Jan 23 00:57:32.752418 ntpd[1660]: ---------------------------------------------------- Jan 23 00:57:32.752431 ntpd[1660]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:57:32.752444 ntpd[1660]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:57:32.752457 ntpd[1660]: corporation. Support and training for ntp-4 are Jan 23 00:57:32.752471 ntpd[1660]: available at https://www.nwtime.org/support Jan 23 00:57:32.752484 ntpd[1660]: ---------------------------------------------------- Jan 23 00:57:32.753371 ntpd[1660]: proto: precision = 0.076 usec (-24) Jan 23 00:57:32.753671 ntpd[1660]: basedate set to 2026-01-10 Jan 23 00:57:32.753689 ntpd[1660]: gps base set to 2026-01-11 (week 2401) Jan 23 00:57:32.757978 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
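
Note: the first two ntpd instances fail to bind [fe80::4001:aff:fe80:65%2]:123 because eth0 only gains a usable IPv6 link-local address at 00:57:32.575 ("Gained IPv6LL" above); the third instance (pid 1660), started after that point, binds it in the entries that follow. Link-local addresses are per-interface, so the bind must carry the scope id (the %2 in the log). A sketch of the equivalent socket call, runnable only as root on a host that owns this address:

    import socket

    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    # AF_INET6 addresses are 4-tuples: (host, port, flowinfo, scope_id).
    # scope_id 2 is eth0's interface index here, i.e. the %2 in the log.
    s.bind(("fe80::4001:aff:fe80:65", 123, 0, 2))
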
Jan 23 00:57:32.761185 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:57:32.761185 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:57:32.761185 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:57:32.761185 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: Listen normally on 3 eth0 10.128.0.101:123 Jan 23 00:57:32.761185 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: Listen normally on 4 lo [::1]:123 Jan 23 00:57:32.761185 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:65%2]:123 Jan 23 00:57:32.761185 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: Listening on routing socket on fd #22 for interface updates Jan 23 00:57:32.758929 ntpd[1660]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:57:32.758980 ntpd[1660]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:57:32.759589 ntpd[1660]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:57:32.759632 ntpd[1660]: Listen normally on 3 eth0 10.128.0.101:123 Jan 23 00:57:32.759674 ntpd[1660]: Listen normally on 4 lo [::1]:123 Jan 23 00:57:32.759712 ntpd[1660]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:65%2]:123 Jan 23 00:57:32.760445 ntpd[1660]: Listening on routing socket on fd #22 for interface updates Jan 23 00:57:32.769239 ntpd[1660]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 00:57:32.772024 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 00:57:32.772024 ntpd[1660]: 23 Jan 00:57:32 ntpd[1660]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 00:57:32.769287 ntpd[1660]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 00:57:32.879507 tar[1520]: linux-amd64/README.md Jan 23 00:57:32.905726 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 00:57:33.035287 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 00:57:33.044188 systemd[1]: Started sshd@0-10.128.0.101:22-4.153.228.146:39180.service - OpenSSH per-connection server daemon (4.153.228.146:39180). Jan 23 00:57:33.364400 sshd[1679]: Accepted publickey for core from 4.153.228.146 port 39180 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 00:57:33.369222 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:57:33.381661 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 00:57:33.389961 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 00:57:33.420206 systemd-logind[1512]: New session 1 of user core. Jan 23 00:57:33.423397 instance-setup[1665]: INFO Running google_set_multiqueue. Jan 23 00:57:33.443174 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 00:57:33.455791 instance-setup[1665]: INFO Set channels for eth0 to 2. Jan 23 00:57:33.460271 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 00:57:33.472161 instance-setup[1665]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 23 00:57:33.479774 instance-setup[1665]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 23 00:57:33.479914 instance-setup[1665]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. 
Jan 23 00:57:33.483673 instance-setup[1665]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 23 00:57:33.485906 instance-setup[1665]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 23 00:57:33.486194 instance-setup[1665]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 23 00:57:33.486669 instance-setup[1665]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 23 00:57:33.489959 instance-setup[1665]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 23 00:57:33.495716 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 00:57:33.501104 instance-setup[1665]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 23 00:57:33.502753 systemd-logind[1512]: New session c1 of user core. Jan 23 00:57:33.512037 instance-setup[1665]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 23 00:57:33.516074 instance-setup[1665]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 23 00:57:33.518019 instance-setup[1665]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 23 00:57:33.547296 init.sh[1657]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 23 00:57:33.771860 startup-script[1718]: INFO Starting startup scripts. Jan 23 00:57:33.783902 startup-script[1718]: INFO No startup scripts found in metadata. Jan 23 00:57:33.783991 startup-script[1718]: INFO Finished running startup scripts. Jan 23 00:57:33.811466 init.sh[1657]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 23 00:57:33.812075 init.sh[1657]: + daemon_pids=() Jan 23 00:57:33.812233 init.sh[1657]: + for d in accounts clock_skew network Jan 23 00:57:33.812591 init.sh[1657]: + daemon_pids+=($!) Jan 23 00:57:33.812907 init.sh[1723]: + /usr/bin/google_accounts_daemon Jan 23 00:57:33.813315 init.sh[1657]: + for d in accounts clock_skew network Jan 23 00:57:33.813652 init.sh[1657]: + daemon_pids+=($!) Jan 23 00:57:33.814177 init.sh[1724]: + /usr/bin/google_clock_skew_daemon Jan 23 00:57:33.815022 init.sh[1657]: + for d in accounts clock_skew network Jan 23 00:57:33.815370 init.sh[1657]: + daemon_pids+=($!) Jan 23 00:57:33.816129 init.sh[1725]: + /usr/bin/google_network_daemon Jan 23 00:57:33.819184 init.sh[1657]: + NOTIFY_SOCKET=/run/systemd/notify Jan 23 00:57:33.819184 init.sh[1657]: + /usr/bin/systemd-notify --ready Jan 23 00:57:33.837352 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 23 00:57:33.848717 init.sh[1657]: + wait -n 1723 1724 1725 Jan 23 00:57:33.891571 systemd[1701]: Queued start job for default target default.target. Jan 23 00:57:33.899318 systemd[1701]: Created slice app.slice - User Application Slice. Jan 23 00:57:33.899379 systemd[1701]: Reached target paths.target - Paths. Jan 23 00:57:33.899466 systemd[1701]: Reached target timers.target - Timers. Jan 23 00:57:33.904187 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 00:57:33.946818 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 00:57:33.947095 systemd[1701]: Reached target sockets.target - Sockets. Jan 23 00:57:33.947315 systemd[1701]: Reached target basic.target - Basic System. Jan 23 00:57:33.947466 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 00:57:33.947706 systemd[1701]: Reached target default.target - Main User Target. 
Jan 23 00:57:33.948018 systemd[1701]: Startup finished in 431ms. Jan 23 00:57:33.965198 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 00:57:34.173976 systemd[1]: Started sshd@1-10.128.0.101:22-4.153.228.146:39194.service - OpenSSH per-connection server daemon (4.153.228.146:39194). Jan 23 00:57:34.362960 google-clock-skew[1724]: INFO Starting Google Clock Skew daemon. Jan 23 00:57:34.376989 google-clock-skew[1724]: INFO Clock drift token has changed: 0. Jan 23 00:57:34.382855 google-networking[1725]: INFO Starting Google Networking daemon. Jan 23 00:57:34.396624 groupadd[1741]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 23 00:57:34.402962 groupadd[1741]: group added to /etc/gshadow: name=google-sudoers Jan 23 00:57:34.463739 groupadd[1741]: new group: name=google-sudoers, GID=1000 Jan 23 00:57:34.486666 sshd[1738]: Accepted publickey for core from 4.153.228.146 port 39194 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 00:57:34.487167 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:57:34.501981 systemd-logind[1512]: New session 2 of user core. Jan 23 00:57:34.503489 google-accounts[1723]: INFO Starting Google Accounts daemon. Jan 23 00:57:34.506137 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 00:57:34.517321 google-accounts[1723]: WARNING OS Login not installed. Jan 23 00:57:34.521623 google-accounts[1723]: INFO Creating a new user account for 0. Jan 23 00:57:34.530808 init.sh[1751]: useradd: invalid user name '0': use --badname to ignore Jan 23 00:57:34.531151 google-accounts[1723]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 23 00:57:34.668495 sshd[1752]: Connection closed by 4.153.228.146 port 39194 Jan 23 00:57:34.669006 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jan 23 00:57:34.676204 systemd[1]: sshd@1-10.128.0.101:22-4.153.228.146:39194.service: Deactivated successfully. Jan 23 00:57:34.680447 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 00:57:34.685112 systemd-logind[1512]: Session 2 logged out. Waiting for processes to exit. Jan 23 00:57:34.687449 systemd-logind[1512]: Removed session 2. Jan 23 00:57:34.697617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:57:34.717627 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:57:34.722409 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 00:57:34.733718 systemd[1]: Started sshd@2-10.128.0.101:22-4.153.228.146:51522.service - OpenSSH per-connection server daemon (4.153.228.146:51522). Jan 23 00:57:34.745374 systemd[1]: Startup finished in 3.632s (kernel) + 7.499s (initrd) + 8.519s (userspace) = 19.651s. Jan 23 00:57:35.000711 systemd-resolved[1431]: Clock change detected. Flushing caches. Jan 23 00:57:35.002837 google-clock-skew[1724]: INFO Synced system time with hardware clock. Jan 23 00:57:35.071725 sshd[1765]: Accepted publickey for core from 4.153.228.146 port 51522 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 00:57:35.075015 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:57:35.084312 systemd-logind[1512]: New session 3 of user core. 
Jan 23 00:57:35.088494 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 00:57:35.241859 sshd[1776]: Connection closed by 4.153.228.146 port 51522 Jan 23 00:57:35.241643 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jan 23 00:57:35.250152 systemd[1]: sshd@2-10.128.0.101:22-4.153.228.146:51522.service: Deactivated successfully. Jan 23 00:57:35.253579 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 00:57:35.257788 systemd-logind[1512]: Session 3 logged out. Waiting for processes to exit. Jan 23 00:57:35.261314 systemd-logind[1512]: Removed session 3. Jan 23 00:57:35.521225 kubelet[1763]: E0123 00:57:35.521053 1763 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:57:35.524523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:57:35.524766 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:57:35.525391 systemd[1]: kubelet.service: Consumed 1.176s CPU time, 258M memory peak. Jan 23 00:57:45.292647 systemd[1]: Started sshd@3-10.128.0.101:22-4.153.228.146:51954.service - OpenSSH per-connection server daemon (4.153.228.146:51954). Jan 23 00:57:45.548906 sshd[1784]: Accepted publickey for core from 4.153.228.146 port 51954 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 00:57:45.550663 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:57:45.551953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 00:57:45.554383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:57:45.562613 systemd-logind[1512]: New session 4 of user core. Jan 23 00:57:45.566901 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 00:57:45.735307 sshd[1790]: Connection closed by 4.153.228.146 port 51954 Jan 23 00:57:45.737570 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Jan 23 00:57:45.743149 systemd[1]: sshd@3-10.128.0.101:22-4.153.228.146:51954.service: Deactivated successfully. Jan 23 00:57:45.745720 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 00:57:45.747384 systemd-logind[1512]: Session 4 logged out. Waiting for processes to exit. Jan 23 00:57:45.750657 systemd-logind[1512]: Removed session 4. Jan 23 00:57:45.783737 systemd[1]: Started sshd@4-10.128.0.101:22-4.153.228.146:51960.service - OpenSSH per-connection server daemon (4.153.228.146:51960). Jan 23 00:57:45.924234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:57:45.939862 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:57:46.002372 kubelet[1804]: E0123 00:57:46.002299 1804 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:57:46.008008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:57:46.008291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:57:46.009102 systemd[1]: kubelet.service: Consumed 218ms CPU time, 110.9M memory peak. Jan 23 00:57:46.060864 sshd[1796]: Accepted publickey for core from 4.153.228.146 port 51960 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 00:57:46.062573 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:57:46.070336 systemd-logind[1512]: New session 5 of user core. Jan 23 00:57:46.078520 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 00:57:46.241168 sshd[1812]: Connection closed by 4.153.228.146 port 51960 Jan 23 00:57:46.242450 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Jan 23 00:57:46.248311 systemd[1]: sshd@4-10.128.0.101:22-4.153.228.146:51960.service: Deactivated successfully. Jan 23 00:57:46.250957 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 00:57:46.252609 systemd-logind[1512]: Session 5 logged out. Waiting for processes to exit. Jan 23 00:57:46.254281 systemd-logind[1512]: Removed session 5. Jan 23 00:57:46.290560 systemd[1]: Started sshd@5-10.128.0.101:22-4.153.228.146:51972.service - OpenSSH per-connection server daemon (4.153.228.146:51972). Jan 23 00:57:46.556552 sshd[1818]: Accepted publickey for core from 4.153.228.146 port 51972 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 00:57:46.558214 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:57:46.565333 systemd-logind[1512]: New session 6 of user core. Jan 23 00:57:46.576515 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 00:57:46.740208 sshd[1821]: Connection closed by 4.153.228.146 port 51972 Jan 23 00:57:46.741573 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Jan 23 00:57:46.747029 systemd-logind[1512]: Session 6 logged out. Waiting for processes to exit. Jan 23 00:57:46.747800 systemd[1]: sshd@5-10.128.0.101:22-4.153.228.146:51972.service: Deactivated successfully. Jan 23 00:57:46.750312 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 00:57:46.752692 systemd-logind[1512]: Removed session 6. Jan 23 00:57:46.792095 systemd[1]: Started sshd@6-10.128.0.101:22-4.153.228.146:51980.service - OpenSSH per-connection server daemon (4.153.228.146:51980). Jan 23 00:57:47.062860 sshd[1827]: Accepted publickey for core from 4.153.228.146 port 51980 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 00:57:47.064590 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:57:47.072186 systemd-logind[1512]: New session 7 of user core. Jan 23 00:57:47.077479 systemd[1]: Started session-7.scope - Session 7 of User core. 
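
Note: both kubelet starts fail the same way, because /var/lib/kubelet/config.yaml does not exist yet; that is expected on a node that has not yet run kubeadm init/join, which generates the file during bootstrap. For illustration only, a sketch that writes a minimal KubeletConfiguration of the kind that lands there; the field values are assumptions, not taken from this host:

    import pathlib

    # Illustrative KubeletConfiguration; on kubeadm-managed nodes this
    # file is normally generated by `kubeadm init` / `kubeadm join`.
    config = "\n".join([
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",  # matches SystemdCgroup=true in the containerd config above
        "containerRuntimeEndpoint: unix:///run/containerd/containerd.sock",
    ]) + "\n"
    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(config)
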
Jan 23 00:57:47.234023 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 00:57:47.234613 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:57:47.249562 sudo[1831]: pam_unix(sudo:session): session closed for user root Jan 23 00:57:47.285357 sshd[1830]: Connection closed by 4.153.228.146 port 51980 Jan 23 00:57:47.286433 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Jan 23 00:57:47.292550 systemd-logind[1512]: Session 7 logged out. Waiting for processes to exit. Jan 23 00:57:47.293015 systemd[1]: sshd@6-10.128.0.101:22-4.153.228.146:51980.service: Deactivated successfully. Jan 23 00:57:47.295600 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 00:57:47.297947 systemd-logind[1512]: Removed session 7. Jan 23 00:57:47.331518 systemd[1]: Started sshd@7-10.128.0.101:22-4.153.228.146:51982.service - OpenSSH per-connection server daemon (4.153.228.146:51982). Jan 23 00:57:47.589819 sshd[1837]: Accepted publickey for core from 4.153.228.146 port 51982 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 00:57:47.591558 sshd-session[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:57:47.598991 systemd-logind[1512]: New session 8 of user core. Jan 23 00:57:47.605465 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 00:57:47.745453 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 00:57:47.745933 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:57:47.752442 sudo[1842]: pam_unix(sudo:session): session closed for user root Jan 23 00:57:47.765772 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 00:57:47.766236 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:57:47.779594 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 00:57:47.825859 augenrules[1864]: No rules Jan 23 00:57:47.826730 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 00:57:47.827006 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 00:57:47.828524 sudo[1841]: pam_unix(sudo:session): session closed for user root Jan 23 00:57:47.864253 sshd[1840]: Connection closed by 4.153.228.146 port 51982 Jan 23 00:57:47.865584 sshd-session[1837]: pam_unix(sshd:session): session closed for user core Jan 23 00:57:47.871811 systemd[1]: sshd@7-10.128.0.101:22-4.153.228.146:51982.service: Deactivated successfully. Jan 23 00:57:47.872175 systemd-logind[1512]: Session 8 logged out. Waiting for processes to exit. Jan 23 00:57:47.874467 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 00:57:47.876844 systemd-logind[1512]: Removed session 8. Jan 23 00:57:47.909813 systemd[1]: Started sshd@8-10.128.0.101:22-4.153.228.146:51990.service - OpenSSH per-connection server daemon (4.153.228.146:51990). Jan 23 00:57:48.151784 sshd[1873]: Accepted publickey for core from 4.153.228.146 port 51990 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss Jan 23 00:57:48.153229 sshd-session[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:57:48.159336 systemd-logind[1512]: New session 9 of user core. Jan 23 00:57:48.165484 systemd[1]: Started session-9.scope - Session 9 of User core. 
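
Note: each "Accepted publickey" entry above logs the key as an OpenSSH SHA-256 fingerprint, which is the base64-encoded SHA-256 digest of the raw public-key blob with the padding stripped. A small sketch (helper name is ours) that reproduces the format from an authorized_keys-style line:

    import base64, hashlib

    def openssh_fingerprint(pubkey_line: str) -> str:
        # pubkey_line looks like: "ssh-rsa AAAAB3NzaC1yc2E... comment"
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
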
Jan 23 00:57:48.295900 sudo[1877]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 00:57:48.296411 sudo[1877]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:57:48.768535 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 00:57:48.784802 (dockerd)[1895]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 00:57:49.141320 dockerd[1895]: time="2026-01-23T00:57:49.140628339Z" level=info msg="Starting up" Jan 23 00:57:49.142735 dockerd[1895]: time="2026-01-23T00:57:49.142681920Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 00:57:49.158211 dockerd[1895]: time="2026-01-23T00:57:49.158110655Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 00:57:49.183494 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport715667251-merged.mount: Deactivated successfully. Jan 23 00:57:49.209899 dockerd[1895]: time="2026-01-23T00:57:49.209590583Z" level=info msg="Loading containers: start." Jan 23 00:57:49.228289 kernel: Initializing XFRM netlink socket Jan 23 00:57:49.571761 systemd-networkd[1427]: docker0: Link UP Jan 23 00:57:49.577818 dockerd[1895]: time="2026-01-23T00:57:49.577762236Z" level=info msg="Loading containers: done." Jan 23 00:57:49.597918 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2355493212-merged.mount: Deactivated successfully. Jan 23 00:57:49.602423 dockerd[1895]: time="2026-01-23T00:57:49.601778153Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 00:57:49.602423 dockerd[1895]: time="2026-01-23T00:57:49.601902228Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 00:57:49.602423 dockerd[1895]: time="2026-01-23T00:57:49.602024001Z" level=info msg="Initializing buildkit" Jan 23 00:57:49.631810 dockerd[1895]: time="2026-01-23T00:57:49.631740515Z" level=info msg="Completed buildkit initialization" Jan 23 00:57:49.640852 dockerd[1895]: time="2026-01-23T00:57:49.640776953Z" level=info msg="Daemon has completed initialization" Jan 23 00:57:49.641017 dockerd[1895]: time="2026-01-23T00:57:49.640851261Z" level=info msg="API listen on /run/docker.sock" Jan 23 00:57:49.641377 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 00:57:50.569392 containerd[1536]: time="2026-01-23T00:57:50.569325726Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 00:57:51.081613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188020623.mount: Deactivated successfully. 
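The PullImage lines that follow are CRI operations served by containerd; docker is only hosting its own containerd client here. A sketch of reproducing one such pull by hand, with the tag taken from the log:

    crictl pull registry.k8s.io/kube-apiserver:v1.34.3
    crictl images | grep kube-apiserver
    # or straight against containerd's k8s.io namespace:
    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.34.3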
Jan 23 00:57:52.606904 containerd[1536]: time="2026-01-23T00:57:52.606836451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:52.608233 containerd[1536]: time="2026-01-23T00:57:52.608176855Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27076160" Jan 23 00:57:52.609418 containerd[1536]: time="2026-01-23T00:57:52.609350295Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:52.613084 containerd[1536]: time="2026-01-23T00:57:52.612566167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:52.613872 containerd[1536]: time="2026-01-23T00:57:52.613826753Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.044449655s" Jan 23 00:57:52.613990 containerd[1536]: time="2026-01-23T00:57:52.613881036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 23 00:57:52.614868 containerd[1536]: time="2026-01-23T00:57:52.614842229Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 00:57:54.100913 containerd[1536]: time="2026-01-23T00:57:54.100849203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:54.102299 containerd[1536]: time="2026-01-23T00:57:54.102235746Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21164498" Jan 23 00:57:54.103247 containerd[1536]: time="2026-01-23T00:57:54.103174801Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:54.106402 containerd[1536]: time="2026-01-23T00:57:54.106343023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:54.108160 containerd[1536]: time="2026-01-23T00:57:54.107563542Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.492573048s" Jan 23 00:57:54.108160 containerd[1536]: time="2026-01-23T00:57:54.107610757Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 23 00:57:54.108363 
containerd[1536]: time="2026-01-23T00:57:54.108314983Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 00:57:55.420594 containerd[1536]: time="2026-01-23T00:57:55.420530234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:55.422060 containerd[1536]: time="2026-01-23T00:57:55.421942664Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15727967" Jan 23 00:57:55.423334 containerd[1536]: time="2026-01-23T00:57:55.423246347Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:55.427572 containerd[1536]: time="2026-01-23T00:57:55.427506892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:55.429035 containerd[1536]: time="2026-01-23T00:57:55.428814782Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.320462545s" Jan 23 00:57:55.429035 containerd[1536]: time="2026-01-23T00:57:55.428860069Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 23 00:57:55.429894 containerd[1536]: time="2026-01-23T00:57:55.429856096Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 00:57:56.025121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 00:57:56.029544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:57:56.353739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:57:56.371949 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:57:56.447092 kubelet[2182]: E0123 00:57:56.447039 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:57:56.451661 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:57:56.452122 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:57:56.453431 systemd[1]: kubelet.service: Consumed 248ms CPU time, 109.1M memory peak. Jan 23 00:57:56.596317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635789074.mount: Deactivated successfully. 
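Restart counter 2 and the identical config.yaml error show the unit cycling under its Restart= policy rather than hitting a new fault. A sketch for inspecting such a loop with stock systemd tooling:

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
    journalctl -u kubelet -n 50 --no-pager   # last attempts, including the exit status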
Jan 23 00:57:57.063702 containerd[1536]: time="2026-01-23T00:57:57.063636039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:57.065093 containerd[1536]: time="2026-01-23T00:57:57.064854564Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25967316" Jan 23 00:57:57.066494 containerd[1536]: time="2026-01-23T00:57:57.066457708Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:57.069067 containerd[1536]: time="2026-01-23T00:57:57.069006553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:57.070094 containerd[1536]: time="2026-01-23T00:57:57.069871152Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.639966675s" Jan 23 00:57:57.070094 containerd[1536]: time="2026-01-23T00:57:57.069912474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 00:57:57.070698 containerd[1536]: time="2026-01-23T00:57:57.070656281Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 00:57:57.485212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692456109.mount: Deactivated successfully. 
Jan 23 00:57:58.747224 containerd[1536]: time="2026-01-23T00:57:58.747152748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:58.748657 containerd[1536]: time="2026-01-23T00:57:58.748603965Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22395089" Jan 23 00:57:58.749825 containerd[1536]: time="2026-01-23T00:57:58.749757951Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:58.753607 containerd[1536]: time="2026-01-23T00:57:58.753534409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:58.755586 containerd[1536]: time="2026-01-23T00:57:58.755132720Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.684439507s" Jan 23 00:57:58.755586 containerd[1536]: time="2026-01-23T00:57:58.755180889Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 23 00:57:58.756419 containerd[1536]: time="2026-01-23T00:57:58.756369865Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 00:57:59.122257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790859452.mount: Deactivated successfully. 
Jan 23 00:57:59.126795 containerd[1536]: time="2026-01-23T00:57:59.126738477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:59.127832 containerd[1536]: time="2026-01-23T00:57:59.127788495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=322216" Jan 23 00:57:59.129302 containerd[1536]: time="2026-01-23T00:57:59.128981221Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:59.131985 containerd[1536]: time="2026-01-23T00:57:59.131922504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:59.133443 containerd[1536]: time="2026-01-23T00:57:59.133396144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 376.988525ms" Jan 23 00:57:59.133443 containerd[1536]: time="2026-01-23T00:57:59.133437412Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 23 00:57:59.134363 containerd[1536]: time="2026-01-23T00:57:59.134323339Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 00:57:59.569013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341347659.mount: Deactivated successfully. Jan 23 00:58:02.355647 containerd[1536]: time="2026-01-23T00:58:02.355574084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:02.357215 containerd[1536]: time="2026-01-23T00:58:02.357175055Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74172832" Jan 23 00:58:02.358208 containerd[1536]: time="2026-01-23T00:58:02.358165974Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:02.365397 containerd[1536]: time="2026-01-23T00:58:02.365226881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:02.366539 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
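The pulls in this stretch (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd) are the standard kubeadm control-plane image set. A sketch of listing, and optionally pre-pulling, that set for the version seen in the log:

    kubeadm config images list --kubernetes-version v1.34.3
    kubeadm config images pull --kubernetes-version v1.34.3   # warm the cache before init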
Jan 23 00:58:02.367285 containerd[1536]: time="2026-01-23T00:58:02.367234476Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.232870257s" Jan 23 00:58:02.368900 containerd[1536]: time="2026-01-23T00:58:02.368823769Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 23 00:58:06.525369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 00:58:06.531569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:06.743063 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 00:58:06.743185 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 00:58:06.743900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:06.744497 systemd[1]: kubelet.service: Consumed 101ms CPU time, 64.4M memory peak. Jan 23 00:58:06.749724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:06.790633 systemd[1]: Reload requested from client PID 2338 ('systemctl') (unit session-9.scope)... Jan 23 00:58:06.790858 systemd[1]: Reloading... Jan 23 00:58:06.945305 zram_generator::config[2379]: No configuration found. Jan 23 00:58:07.286356 systemd[1]: Reloading finished in 494 ms. Jan 23 00:58:07.365891 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 00:58:07.366023 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 00:58:07.366645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:07.366705 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.2M memory peak. Jan 23 00:58:07.370535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:07.664399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:07.680841 (kubelet)[2433]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:58:07.741733 kubelet[2433]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 00:58:07.741733 kubelet[2433]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
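Both deprecation warnings point the same way: --volume-plugin-dir has a KubeletConfiguration equivalent, and the sandbox image is now taken from the CRI runtime (containerd's sandbox image setting) rather than from --pod-infra-container-image. A migration sketch for the first flag, using the directory this log later recreates (the append is illustrative; merge into the real config in practice):

    cat <<'EOF' >>/var/lib/kubelet/config.yaml
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
    systemctl daemon-reload && systemctl restart kubelet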
Jan 23 00:58:07.742239 kubelet[2433]: I0123 00:58:07.741816 2433 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:58:08.459546 kubelet[2433]: I0123 00:58:08.459484 2433 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 00:58:08.459546 kubelet[2433]: I0123 00:58:08.459521 2433 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:58:08.459546 kubelet[2433]: I0123 00:58:08.459564 2433 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 00:58:08.459808 kubelet[2433]: I0123 00:58:08.459574 2433 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 00:58:08.459953 kubelet[2433]: I0123 00:58:08.459915 2433 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 00:58:08.467110 kubelet[2433]: E0123 00:58:08.467030 2433 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.101:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 00:58:08.467886 kubelet[2433]: I0123 00:58:08.467696 2433 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:58:08.473782 kubelet[2433]: I0123 00:58:08.473751 2433 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:58:08.477842 kubelet[2433]: I0123 00:58:08.477805 2433 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 00:58:08.478177 kubelet[2433]: I0123 00:58:08.478128 2433 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:58:08.478424 kubelet[2433]: I0123 00:58:08.478159 2433 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:58:08.478424 kubelet[2433]: I0123 00:58:08.478420 2433 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 00:58:08.478656 kubelet[2433]: I0123 00:58:08.478437 2433 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 00:58:08.478656 kubelet[2433]: I0123 00:58:08.478558 2433 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 00:58:08.482667 kubelet[2433]: I0123 00:58:08.482634 2433 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:08.482927 kubelet[2433]: I0123 00:58:08.482906 2433 kubelet.go:475] "Attempting to sync node with API server" Jan 23 00:58:08.482992 kubelet[2433]: I0123 00:58:08.482945 2433 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:58:08.482992 kubelet[2433]: I0123 00:58:08.482987 2433 kubelet.go:387] "Adding apiserver pod source" Jan 23 00:58:08.485792 kubelet[2433]: I0123 00:58:08.485761 2433 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:58:08.492284 kubelet[2433]: E0123 00:58:08.491349 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512&limit=500&resourceVersion=0\": dial tcp 10.128.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 00:58:08.492284 kubelet[2433]: E0123 00:58:08.491893 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.128.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 00:58:08.492284 kubelet[2433]: I0123 00:58:08.492013 2433 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:58:08.493077 kubelet[2433]: I0123 00:58:08.493050 2433 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 00:58:08.493204 kubelet[2433]: I0123 00:58:08.493190 2433 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 00:58:08.493373 kubelet[2433]: W0123 00:58:08.493358 2433 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 00:58:08.511643 kubelet[2433]: I0123 00:58:08.511615 2433 server.go:1262] "Started kubelet" Jan 23 00:58:08.513042 kubelet[2433]: I0123 00:58:08.513017 2433 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:58:08.514018 kubelet[2433]: I0123 00:58:08.513955 2433 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:58:08.516026 kubelet[2433]: I0123 00:58:08.515805 2433 server.go:310] "Adding debug handlers to kubelet server" Jan 23 00:58:08.523409 kubelet[2433]: E0123 00:58:08.521421 2433 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512.188d3642a6422568 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512,UID:ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512,},FirstTimestamp:2026-01-23 00:58:08.511567208 +0000 UTC m=+0.825805334,LastTimestamp:2026-01-23 00:58:08.511567208 +0000 UTC m=+0.825805334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512,}" Jan 23 00:58:08.524290 kubelet[2433]: I0123 00:58:08.524201 2433 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:58:08.524404 kubelet[2433]: I0123 00:58:08.524344 2433 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 00:58:08.524607 kubelet[2433]: I0123 00:58:08.524579 2433 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:58:08.524988 kubelet[2433]: I0123 00:58:08.524956 2433 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:58:08.528494 kubelet[2433]: I0123 00:58:08.528463 2433 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 00:58:08.528722 kubelet[2433]: E0123 00:58:08.528691 
2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" Jan 23 00:58:08.529543 kubelet[2433]: I0123 00:58:08.529514 2433 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 00:58:08.529636 kubelet[2433]: I0123 00:58:08.529590 2433 reconciler.go:29] "Reconciler: start to sync state" Jan 23 00:58:08.531371 kubelet[2433]: E0123 00:58:08.530840 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 00:58:08.531371 kubelet[2433]: E0123 00:58:08.530960 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512?timeout=10s\": dial tcp 10.128.0.101:6443: connect: connection refused" interval="200ms" Jan 23 00:58:08.535324 kubelet[2433]: I0123 00:58:08.533924 2433 factory.go:223] Registration of the containerd container factory successfully Jan 23 00:58:08.535324 kubelet[2433]: I0123 00:58:08.533950 2433 factory.go:223] Registration of the systemd container factory successfully Jan 23 00:58:08.535324 kubelet[2433]: I0123 00:58:08.534045 2433 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:58:08.552451 kubelet[2433]: I0123 00:58:08.552401 2433 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 00:58:08.554330 kubelet[2433]: I0123 00:58:08.554302 2433 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 00:58:08.554330 kubelet[2433]: I0123 00:58:08.554330 2433 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 00:58:08.554489 kubelet[2433]: I0123 00:58:08.554371 2433 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 00:58:08.554489 kubelet[2433]: E0123 00:58:08.554432 2433 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:58:08.568799 kubelet[2433]: E0123 00:58:08.568746 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 00:58:08.569724 kubelet[2433]: E0123 00:58:08.569683 2433 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:58:08.578485 kubelet[2433]: I0123 00:58:08.578419 2433 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:58:08.578485 kubelet[2433]: I0123 00:58:08.578463 2433 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:58:08.578485 kubelet[2433]: I0123 00:58:08.578487 2433 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:08.580715 kubelet[2433]: I0123 00:58:08.580674 2433 policy_none.go:49] "None policy: Start" Jan 23 00:58:08.580715 kubelet[2433]: I0123 00:58:08.580699 2433 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 00:58:08.580715 kubelet[2433]: I0123 00:58:08.580716 2433 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 00:58:08.582555 kubelet[2433]: I0123 00:58:08.582515 2433 policy_none.go:47] "Start" Jan 23 00:58:08.595744 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 00:58:08.616942 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 00:58:08.629026 kubelet[2433]: E0123 00:58:08.628965 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" Jan 23 00:58:08.642033 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 00:58:08.645325 kubelet[2433]: E0123 00:58:08.644620 2433 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 00:58:08.645325 kubelet[2433]: I0123 00:58:08.644883 2433 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:58:08.645325 kubelet[2433]: I0123 00:58:08.644900 2433 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:58:08.646732 kubelet[2433]: I0123 00:58:08.646689 2433 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:58:08.648154 kubelet[2433]: E0123 00:58:08.648079 2433 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 00:58:08.648154 kubelet[2433]: E0123 00:58:08.648130 2433 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" Jan 23 00:58:08.675793 systemd[1]: Created slice kubepods-burstable-pod4c26264afe4d752961c147a85ba6dba3.slice - libcontainer container kubepods-burstable-pod4c26264afe4d752961c147a85ba6dba3.slice. Jan 23 00:58:08.688557 kubelet[2433]: E0123 00:58:08.688229 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.693883 systemd[1]: Created slice kubepods-burstable-pod60fa2616428c20cd727dc4fe435b6a13.slice - libcontainer container kubepods-burstable-pod60fa2616428c20cd727dc4fe435b6a13.slice. 
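The kubepods-burstable-pod*.slice units being created here correspond to the static pods the kubelet found under its staticPodPath. On a control-plane node like this one, a sketch of what to expect there (file names are the kubeadm defaults, hence an assumption):

    ls /etc/kubernetes/manifests
    # typically: etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml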
Jan 23 00:58:08.703289 kubelet[2433]: E0123 00:58:08.703069 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.707221 systemd[1]: Created slice kubepods-burstable-podadccd5f879ca1ed1ab8f149937e07884.slice - libcontainer container kubepods-burstable-podadccd5f879ca1ed1ab8f149937e07884.slice. Jan 23 00:58:08.710310 kubelet[2433]: E0123 00:58:08.710172 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.731688 kubelet[2433]: E0123 00:58:08.731633 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512?timeout=10s\": dial tcp 10.128.0.101:6443: connect: connection refused" interval="400ms" Jan 23 00:58:08.750547 kubelet[2433]: I0123 00:58:08.750406 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.751055 kubelet[2433]: E0123 00:58:08.750837 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.101:6443/api/v1/nodes\": dial tcp 10.128.0.101:6443: connect: connection refused" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.830454 kubelet[2433]: I0123 00:58:08.830374 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60fa2616428c20cd727dc4fe435b6a13-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"60fa2616428c20cd727dc4fe435b6a13\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.830454 kubelet[2433]: I0123 00:58:08.830444 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60fa2616428c20cd727dc4fe435b6a13-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"60fa2616428c20cd727dc4fe435b6a13\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.830690 kubelet[2433]: I0123 00:58:08.830472 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60fa2616428c20cd727dc4fe435b6a13-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"60fa2616428c20cd727dc4fe435b6a13\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.830690 kubelet[2433]: I0123 00:58:08.830499 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60fa2616428c20cd727dc4fe435b6a13-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: 
\"60fa2616428c20cd727dc4fe435b6a13\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.830690 kubelet[2433]: I0123 00:58:08.830527 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c26264afe4d752961c147a85ba6dba3-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"4c26264afe4d752961c147a85ba6dba3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.830690 kubelet[2433]: I0123 00:58:08.830554 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c26264afe4d752961c147a85ba6dba3-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"4c26264afe4d752961c147a85ba6dba3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.830895 kubelet[2433]: I0123 00:58:08.830581 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60fa2616428c20cd727dc4fe435b6a13-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"60fa2616428c20cd727dc4fe435b6a13\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.830895 kubelet[2433]: I0123 00:58:08.830607 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/adccd5f879ca1ed1ab8f149937e07884-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"adccd5f879ca1ed1ab8f149937e07884\") " pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.830895 kubelet[2433]: I0123 00:58:08.830647 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c26264afe4d752961c147a85ba6dba3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"4c26264afe4d752961c147a85ba6dba3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.956499 kubelet[2433]: I0123 00:58:08.956454 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.956948 kubelet[2433]: E0123 00:58:08.956893 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.101:6443/api/v1/nodes\": dial tcp 10.128.0.101:6443: connect: connection refused" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:08.992868 containerd[1536]: time="2026-01-23T00:58:08.992727639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512,Uid:4c26264afe4d752961c147a85ba6dba3,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:09.007450 containerd[1536]: time="2026-01-23T00:58:09.007040424Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512,Uid:60fa2616428c20cd727dc4fe435b6a13,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:09.016076 containerd[1536]: time="2026-01-23T00:58:09.016030873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512,Uid:adccd5f879ca1ed1ab8f149937e07884,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:09.132890 kubelet[2433]: E0123 00:58:09.132834 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512?timeout=10s\": dial tcp 10.128.0.101:6443: connect: connection refused" interval="800ms" Jan 23 00:58:09.362453 kubelet[2433]: I0123 00:58:09.362305 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:09.363135 kubelet[2433]: E0123 00:58:09.363080 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.101:6443/api/v1/nodes\": dial tcp 10.128.0.101:6443: connect: connection refused" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:09.380088 kubelet[2433]: E0123 00:58:09.380042 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 00:58:09.387487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040111689.mount: Deactivated successfully. 
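Every "connection refused" against 10.128.0.101:6443 is the bootstrap ordering problem in action: the kubelet cannot register, watch, or renew leases until the kube-apiserver static pod it is busy starting comes up. A sketch of waiting for the flip, with the endpoint taken from the log:

    # /healthz answers once the apiserver is serving; -k skips cert verification
    until curl -ksf https://10.128.0.101:6443/healthz >/dev/null; do sleep 2; done
    echo "apiserver is up"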
Jan 23 00:58:09.394354 containerd[1536]: time="2026-01-23T00:58:09.394301755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:09.397845 containerd[1536]: time="2026-01-23T00:58:09.397523335Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322136" Jan 23 00:58:09.398849 containerd[1536]: time="2026-01-23T00:58:09.398789385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:09.401173 containerd[1536]: time="2026-01-23T00:58:09.400181068Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:09.401843 containerd[1536]: time="2026-01-23T00:58:09.401786152Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:09.402942 containerd[1536]: time="2026-01-23T00:58:09.402868088Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 00:58:09.403926 containerd[1536]: time="2026-01-23T00:58:09.403870962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 00:58:09.405298 containerd[1536]: time="2026-01-23T00:58:09.404972335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:09.406164 containerd[1536]: time="2026-01-23T00:58:09.406101734Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 411.0907ms" Jan 23 00:58:09.409856 containerd[1536]: time="2026-01-23T00:58:09.409370586Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 400.738263ms" Jan 23 00:58:09.416688 containerd[1536]: time="2026-01-23T00:58:09.416636433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 397.49364ms" Jan 23 00:58:09.437393 containerd[1536]: time="2026-01-23T00:58:09.437342326Z" level=info msg="connecting to shim 26f6a5666b6d27e9816a493f3c08f76337cfcd757c4b4b394bbe73d1eb9c4403" address="unix:///run/containerd/s/703a54c0471ad058271f88ac6deaceaf0ee17568f5fb60a44d4c869a06c399b7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 
00:58:09.472150 containerd[1536]: time="2026-01-23T00:58:09.472086151Z" level=info msg="connecting to shim e00c2843dd3ab0891dd46e965bf71abbe9a969135f0c06f3b2b138d38df1d34f" address="unix:///run/containerd/s/f1a8a61ff3b959045d7d5e5d3187f37a7a6e34547a0934f394cdfbda29305639" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:09.475604 kubelet[2433]: E0123 00:58:09.474449 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 00:58:09.481525 containerd[1536]: time="2026-01-23T00:58:09.481473150Z" level=info msg="connecting to shim 86b00b2b358e1972d1f7a5c6c3d69757df70c608a449219cf8764c0ad6b7fc3f" address="unix:///run/containerd/s/f6e0a1dd235b18711fa382dcc97db6c7179be2dd120cb1f2f9b672a50f2258e1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:09.525722 systemd[1]: Started cri-containerd-26f6a5666b6d27e9816a493f3c08f76337cfcd757c4b4b394bbe73d1eb9c4403.scope - libcontainer container 26f6a5666b6d27e9816a493f3c08f76337cfcd757c4b4b394bbe73d1eb9c4403. Jan 23 00:58:09.536388 systemd[1]: Started cri-containerd-86b00b2b358e1972d1f7a5c6c3d69757df70c608a449219cf8764c0ad6b7fc3f.scope - libcontainer container 86b00b2b358e1972d1f7a5c6c3d69757df70c608a449219cf8764c0ad6b7fc3f. Jan 23 00:58:09.548548 systemd[1]: Started cri-containerd-e00c2843dd3ab0891dd46e965bf71abbe9a969135f0c06f3b2b138d38df1d34f.scope - libcontainer container e00c2843dd3ab0891dd46e965bf71abbe9a969135f0c06f3b2b138d38df1d34f. Jan 23 00:58:09.579574 kubelet[2433]: E0123 00:58:09.579498 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512&limit=500&resourceVersion=0\": dial tcp 10.128.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 00:58:09.660357 containerd[1536]: time="2026-01-23T00:58:09.659359563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512,Uid:adccd5f879ca1ed1ab8f149937e07884,Namespace:kube-system,Attempt:0,} returns sandbox id \"86b00b2b358e1972d1f7a5c6c3d69757df70c608a449219cf8764c0ad6b7fc3f\"" Jan 23 00:58:09.664219 kubelet[2433]: E0123 00:58:09.664175 2433 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008db" Jan 23 00:58:09.674857 containerd[1536]: time="2026-01-23T00:58:09.674334511Z" level=info msg="CreateContainer within sandbox \"86b00b2b358e1972d1f7a5c6c3d69757df70c608a449219cf8764c0ad6b7fc3f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 00:58:09.687477 containerd[1536]: time="2026-01-23T00:58:09.687433479Z" level=info msg="Container bf69997d99138a2f712d6ffcc7074eb869deafb2e6ef8226ba32ec393f0bbe56: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:09.689691 containerd[1536]: time="2026-01-23T00:58:09.689653297Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512,Uid:4c26264afe4d752961c147a85ba6dba3,Namespace:kube-system,Attempt:0,} returns sandbox id \"26f6a5666b6d27e9816a493f3c08f76337cfcd757c4b4b394bbe73d1eb9c4403\"" Jan 23 00:58:09.693194 kubelet[2433]: E0123 00:58:09.693139 2433 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008db" Jan 23 00:58:09.699852 containerd[1536]: time="2026-01-23T00:58:09.699808123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512,Uid:60fa2616428c20cd727dc4fe435b6a13,Namespace:kube-system,Attempt:0,} returns sandbox id \"e00c2843dd3ab0891dd46e965bf71abbe9a969135f0c06f3b2b138d38df1d34f\"" Jan 23 00:58:09.700727 containerd[1536]: time="2026-01-23T00:58:09.700673186Z" level=info msg="CreateContainer within sandbox \"86b00b2b358e1972d1f7a5c6c3d69757df70c608a449219cf8764c0ad6b7fc3f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf69997d99138a2f712d6ffcc7074eb869deafb2e6ef8226ba32ec393f0bbe56\"" Jan 23 00:58:09.701739 containerd[1536]: time="2026-01-23T00:58:09.701693837Z" level=info msg="StartContainer for \"bf69997d99138a2f712d6ffcc7074eb869deafb2e6ef8226ba32ec393f0bbe56\"" Jan 23 00:58:09.702867 containerd[1536]: time="2026-01-23T00:58:09.702803545Z" level=info msg="connecting to shim bf69997d99138a2f712d6ffcc7074eb869deafb2e6ef8226ba32ec393f0bbe56" address="unix:///run/containerd/s/f6e0a1dd235b18711fa382dcc97db6c7179be2dd120cb1f2f9b672a50f2258e1" protocol=ttrpc version=3 Jan 23 00:58:09.703432 containerd[1536]: time="2026-01-23T00:58:09.703397399Z" level=info msg="CreateContainer within sandbox \"26f6a5666b6d27e9816a493f3c08f76337cfcd757c4b4b394bbe73d1eb9c4403\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 00:58:09.703615 kubelet[2433]: E0123 00:58:09.703572 2433 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc" Jan 23 00:58:09.715302 containerd[1536]: time="2026-01-23T00:58:09.714670016Z" level=info msg="Container 756131929b7519fd590daf3c66a10901be70a6795aac6366dd90a3cfecdb738b: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:09.726438 containerd[1536]: time="2026-01-23T00:58:09.726397728Z" level=info msg="CreateContainer within sandbox \"e00c2843dd3ab0891dd46e965bf71abbe9a969135f0c06f3b2b138d38df1d34f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 00:58:09.729659 systemd[1]: Started cri-containerd-bf69997d99138a2f712d6ffcc7074eb869deafb2e6ef8226ba32ec393f0bbe56.scope - libcontainer container bf69997d99138a2f712d6ffcc7074eb869deafb2e6ef8226ba32ec393f0bbe56. 
Jan 23 00:58:09.750785 containerd[1536]: time="2026-01-23T00:58:09.750724387Z" level=info msg="CreateContainer within sandbox \"26f6a5666b6d27e9816a493f3c08f76337cfcd757c4b4b394bbe73d1eb9c4403\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"756131929b7519fd590daf3c66a10901be70a6795aac6366dd90a3cfecdb738b\"" Jan 23 00:58:09.754302 containerd[1536]: time="2026-01-23T00:58:09.754151376Z" level=info msg="Container fb59ed2b2e247f6c1a4fc5b64d1e6e3cca834dc26a9270fc4991f0a6af829950: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:09.755168 containerd[1536]: time="2026-01-23T00:58:09.755125892Z" level=info msg="StartContainer for \"756131929b7519fd590daf3c66a10901be70a6795aac6366dd90a3cfecdb738b\"" Jan 23 00:58:09.757039 containerd[1536]: time="2026-01-23T00:58:09.756948743Z" level=info msg="connecting to shim 756131929b7519fd590daf3c66a10901be70a6795aac6366dd90a3cfecdb738b" address="unix:///run/containerd/s/703a54c0471ad058271f88ac6deaceaf0ee17568f5fb60a44d4c869a06c399b7" protocol=ttrpc version=3 Jan 23 00:58:09.774202 containerd[1536]: time="2026-01-23T00:58:09.774119639Z" level=info msg="CreateContainer within sandbox \"e00c2843dd3ab0891dd46e965bf71abbe9a969135f0c06f3b2b138d38df1d34f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fb59ed2b2e247f6c1a4fc5b64d1e6e3cca834dc26a9270fc4991f0a6af829950\"" Jan 23 00:58:09.775484 containerd[1536]: time="2026-01-23T00:58:09.775414511Z" level=info msg="StartContainer for \"fb59ed2b2e247f6c1a4fc5b64d1e6e3cca834dc26a9270fc4991f0a6af829950\"" Jan 23 00:58:09.777814 containerd[1536]: time="2026-01-23T00:58:09.777752924Z" level=info msg="connecting to shim fb59ed2b2e247f6c1a4fc5b64d1e6e3cca834dc26a9270fc4991f0a6af829950" address="unix:///run/containerd/s/f1a8a61ff3b959045d7d5e5d3187f37a7a6e34547a0934f394cdfbda29305639" protocol=ttrpc version=3 Jan 23 00:58:09.803500 systemd[1]: Started cri-containerd-756131929b7519fd590daf3c66a10901be70a6795aac6366dd90a3cfecdb738b.scope - libcontainer container 756131929b7519fd590daf3c66a10901be70a6795aac6366dd90a3cfecdb738b. Jan 23 00:58:09.818507 systemd[1]: Started cri-containerd-fb59ed2b2e247f6c1a4fc5b64d1e6e3cca834dc26a9270fc4991f0a6af829950.scope - libcontainer container fb59ed2b2e247f6c1a4fc5b64d1e6e3cca834dc26a9270fc4991f0a6af829950. 
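With the three shims running, the sandboxes and containers in these events can be inspected directly over the CRI socket; ids below are placeholders for the ones in the log:

    crictl pods --namespace kube-system
    crictl ps -a                 # includes the scheduler/apiserver/controller-manager containers
    crictl logs <container-id>   # e.g. the bf699... scheduler container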
Jan 23 00:58:09.854292 kubelet[2433]: E0123 00:58:09.854205 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 00:58:09.908061 containerd[1536]: time="2026-01-23T00:58:09.907673536Z" level=info msg="StartContainer for \"bf69997d99138a2f712d6ffcc7074eb869deafb2e6ef8226ba32ec393f0bbe56\" returns successfully" Jan 23 00:58:09.931319 containerd[1536]: time="2026-01-23T00:58:09.930389364Z" level=info msg="StartContainer for \"756131929b7519fd590daf3c66a10901be70a6795aac6366dd90a3cfecdb738b\" returns successfully" Jan 23 00:58:09.933870 kubelet[2433]: E0123 00:58:09.933814 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512?timeout=10s\": dial tcp 10.128.0.101:6443: connect: connection refused" interval="1.6s" Jan 23 00:58:09.976894 containerd[1536]: time="2026-01-23T00:58:09.976837910Z" level=info msg="StartContainer for \"fb59ed2b2e247f6c1a4fc5b64d1e6e3cca834dc26a9270fc4991f0a6af829950\" returns successfully" Jan 23 00:58:10.167478 kubelet[2433]: I0123 00:58:10.167440 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:10.589195 kubelet[2433]: E0123 00:58:10.589158 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:10.589809 kubelet[2433]: E0123 00:58:10.589777 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:10.605292 kubelet[2433]: E0123 00:58:10.604645 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:11.603973 kubelet[2433]: E0123 00:58:11.603912 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:11.607174 kubelet[2433]: E0123 00:58:11.605161 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:12.609970 kubelet[2433]: E0123 00:58:12.609921 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:13.301188 kubelet[2433]: E0123 00:58:13.301131 2433 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" not found" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:13.371617 kubelet[2433]: I0123 00:58:13.371547 2433 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:13.429666 kubelet[2433]: I0123 00:58:13.429619 2433 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:13.458652 kubelet[2433]: E0123 00:58:13.458363 2433 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:13.458652 kubelet[2433]: I0123 00:58:13.458411 2433 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:13.468588 kubelet[2433]: E0123 00:58:13.468497 2433 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:13.468588 kubelet[2433]: I0123 00:58:13.468577 2433 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:13.473395 kubelet[2433]: E0123 00:58:13.473350 2433 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:13.495769 kubelet[2433]: I0123 00:58:13.495717 2433 apiserver.go:52] "Watching apiserver" Jan 23 00:58:13.530498 kubelet[2433]: I0123 00:58:13.530446 2433 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 00:58:14.521206 kubelet[2433]: I0123 00:58:14.521157 2433 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:14.530518 kubelet[2433]: I0123 00:58:14.530474 2433 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 23 00:58:15.669174 systemd[1]: Reload requested from client PID 2720 ('systemctl') (unit session-9.scope)... Jan 23 00:58:15.669201 systemd[1]: Reloading... Jan 23 00:58:15.829323 zram_generator::config[2760]: No configuration found. Jan 23 00:58:16.237784 systemd[1]: Reloading finished in 567 ms. Jan 23 00:58:16.271661 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:16.299733 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 00:58:16.300335 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:16.300494 systemd[1]: kubelet.service: Consumed 1.355s CPU time, 123.2M memory peak. Jan 23 00:58:16.303081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 00:58:16.679244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:16.694852 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:58:16.766651 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 00:58:16.767107 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:58:16.767205 kubelet[2813]: I0123 00:58:16.767152 2813 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:58:16.778914 kubelet[2813]: I0123 00:58:16.778871 2813 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 00:58:16.778914 kubelet[2813]: I0123 00:58:16.778908 2813 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:58:16.779116 kubelet[2813]: I0123 00:58:16.778942 2813 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 00:58:16.779116 kubelet[2813]: I0123 00:58:16.778951 2813 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 00:58:16.780289 kubelet[2813]: I0123 00:58:16.779311 2813 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 00:58:16.781874 kubelet[2813]: I0123 00:58:16.781697 2813 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 00:58:16.793317 kubelet[2813]: I0123 00:58:16.792997 2813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:58:16.799155 kubelet[2813]: I0123 00:58:16.799124 2813 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:58:16.803949 kubelet[2813]: I0123 00:58:16.803393 2813 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 00:58:16.803949 kubelet[2813]: I0123 00:58:16.803677 2813 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:58:16.803949 kubelet[2813]: I0123 00:58:16.803713 2813 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:58:16.803949 kubelet[2813]: I0123 00:58:16.803929 2813 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 00:58:16.804331 kubelet[2813]: I0123 00:58:16.803943 2813 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 00:58:16.804331 kubelet[2813]: I0123 00:58:16.803974 2813 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 00:58:16.805192 kubelet[2813]: I0123 00:58:16.805169 2813 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:16.805415 kubelet[2813]: I0123 00:58:16.805397 2813 kubelet.go:475] "Attempting to sync node with API server" Jan 23 00:58:16.805509 kubelet[2813]: I0123 00:58:16.805424 2813 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:58:16.805509 kubelet[2813]: I0123 00:58:16.805457 2813 kubelet.go:387] "Adding apiserver pod source" Jan 23 00:58:16.805509 kubelet[2813]: I0123 00:58:16.805485 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:58:16.812045 kubelet[2813]: I0123 00:58:16.812020 2813 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:58:16.812954 kubelet[2813]: I0123 00:58:16.812926 2813 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 00:58:16.815705 kubelet[2813]: I0123 00:58:16.815671 2813 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is 
disabled" Jan 23 00:58:16.852317 kubelet[2813]: I0123 00:58:16.851744 2813 server.go:1262] "Started kubelet" Jan 23 00:58:16.853195 kubelet[2813]: I0123 00:58:16.853146 2813 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:58:16.853653 kubelet[2813]: I0123 00:58:16.853609 2813 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:58:16.854020 kubelet[2813]: I0123 00:58:16.853805 2813 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 00:58:16.854367 kubelet[2813]: I0123 00:58:16.854346 2813 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:58:16.859282 kubelet[2813]: I0123 00:58:16.857643 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:58:16.859545 kubelet[2813]: I0123 00:58:16.859517 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:58:16.861609 kubelet[2813]: I0123 00:58:16.861294 2813 server.go:310] "Adding debug handlers to kubelet server" Jan 23 00:58:16.868365 kubelet[2813]: I0123 00:58:16.868329 2813 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 00:58:16.878083 kubelet[2813]: I0123 00:58:16.878048 2813 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 00:58:16.878254 kubelet[2813]: I0123 00:58:16.878234 2813 reconciler.go:29] "Reconciler: start to sync state" Jan 23 00:58:16.883209 kubelet[2813]: I0123 00:58:16.881966 2813 factory.go:223] Registration of the systemd container factory successfully Jan 23 00:58:16.883209 kubelet[2813]: I0123 00:58:16.882132 2813 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:58:16.890015 kubelet[2813]: I0123 00:58:16.889878 2813 factory.go:223] Registration of the containerd container factory successfully Jan 23 00:58:16.909527 kubelet[2813]: I0123 00:58:16.909338 2813 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 00:58:16.912032 kubelet[2813]: I0123 00:58:16.912000 2813 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 00:58:16.912326 kubelet[2813]: I0123 00:58:16.912173 2813 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 00:58:16.912326 kubelet[2813]: I0123 00:58:16.912204 2813 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 00:58:16.912326 kubelet[2813]: E0123 00:58:16.912254 2813 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:58:16.998174 kubelet[2813]: I0123 00:58:16.996309 2813 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:58:16.998174 kubelet[2813]: I0123 00:58:16.996329 2813 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:58:16.998174 kubelet[2813]: I0123 00:58:16.996355 2813 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:16.999565 kubelet[2813]: I0123 00:58:16.998656 2813 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 00:58:16.999565 kubelet[2813]: I0123 00:58:16.998697 2813 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 00:58:16.999565 kubelet[2813]: I0123 00:58:16.998723 2813 policy_none.go:49] "None policy: Start" Jan 23 00:58:16.999565 kubelet[2813]: I0123 00:58:16.998748 2813 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 00:58:16.999565 kubelet[2813]: I0123 00:58:16.998766 2813 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 00:58:16.999565 kubelet[2813]: I0123 00:58:16.998912 2813 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 00:58:16.999565 kubelet[2813]: I0123 00:58:16.998924 2813 policy_none.go:47] "Start" Jan 23 00:58:17.008287 kubelet[2813]: E0123 00:58:17.008207 2813 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 00:58:17.008610 kubelet[2813]: I0123 00:58:17.008592 2813 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:58:17.008744 kubelet[2813]: I0123 00:58:17.008710 2813 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:58:17.010608 kubelet[2813]: I0123 00:58:17.010357 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:58:17.015946 kubelet[2813]: I0123 00:58:17.015017 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.015946 kubelet[2813]: E0123 00:58:17.015244 2813 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 00:58:17.018667 kubelet[2813]: I0123 00:58:17.018638 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.020182 kubelet[2813]: I0123 00:58:17.020155 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.035306 kubelet[2813]: I0123 00:58:17.034910 2813 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 23 00:58:17.037904 kubelet[2813]: I0123 00:58:17.037849 2813 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 23 00:58:17.039103 kubelet[2813]: I0123 00:58:17.039075 2813 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 23 00:58:17.039206 kubelet[2813]: E0123 00:58:17.039141 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.080208 kubelet[2813]: I0123 00:58:17.080160 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60fa2616428c20cd727dc4fe435b6a13-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"60fa2616428c20cd727dc4fe435b6a13\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.080638 kubelet[2813]: I0123 00:58:17.080550 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/adccd5f879ca1ed1ab8f149937e07884-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"adccd5f879ca1ed1ab8f149937e07884\") " pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.080852 kubelet[2813]: I0123 00:58:17.080757 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c26264afe4d752961c147a85ba6dba3-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"4c26264afe4d752961c147a85ba6dba3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.080852 kubelet[2813]: I0123 00:58:17.080786 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c26264afe4d752961c147a85ba6dba3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"4c26264afe4d752961c147a85ba6dba3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.081084 kubelet[2813]: I0123 00:58:17.080966 2813 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60fa2616428c20cd727dc4fe435b6a13-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"60fa2616428c20cd727dc4fe435b6a13\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.081084 kubelet[2813]: I0123 00:58:17.080991 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60fa2616428c20cd727dc4fe435b6a13-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"60fa2616428c20cd727dc4fe435b6a13\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.081084 kubelet[2813]: I0123 00:58:17.081036 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60fa2616428c20cd727dc4fe435b6a13-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"60fa2616428c20cd727dc4fe435b6a13\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.081084 kubelet[2813]: I0123 00:58:17.081055 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c26264afe4d752961c147a85ba6dba3-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"4c26264afe4d752961c147a85ba6dba3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.081309 kubelet[2813]: I0123 00:58:17.081168 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60fa2616428c20cd727dc4fe435b6a13-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" (UID: \"60fa2616428c20cd727dc4fe435b6a13\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.125536 kubelet[2813]: I0123 00:58:17.125505 2813 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.138097 kubelet[2813]: I0123 00:58:17.138045 2813 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.138454 kubelet[2813]: I0123 00:58:17.138234 2813 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.160812 update_engine[1513]: I20260123 00:58:17.160723 1513 update_attempter.cc:509] Updating boot flags... 
Jan 23 00:58:17.808004 kubelet[2813]: I0123 00:58:17.807954 2813 apiserver.go:52] "Watching apiserver" Jan 23 00:58:17.879172 kubelet[2813]: I0123 00:58:17.879113 2813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 00:58:17.981027 kubelet[2813]: I0123 00:58:17.980468 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.981188 kubelet[2813]: I0123 00:58:17.981141 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.994293 kubelet[2813]: I0123 00:58:17.993123 2813 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 23 00:58:17.994293 kubelet[2813]: E0123 00:58:17.993193 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:17.996531 kubelet[2813]: I0123 00:58:17.996501 2813 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 23 00:58:17.996951 kubelet[2813]: E0123 00:58:17.996727 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:58:18.040046 kubelet[2813]: I0123 00:58:18.039955 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" podStartSLOduration=1.039931599 podStartE2EDuration="1.039931599s" podCreationTimestamp="2026-01-23 00:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:18.02387318 +0000 UTC m=+1.321559880" watchObservedRunningTime="2026-01-23 00:58:18.039931599 +0000 UTC m=+1.337618301" Jan 23 00:58:18.058145 kubelet[2813]: I0123 00:58:18.057964 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" podStartSLOduration=1.057940458 podStartE2EDuration="1.057940458s" podCreationTimestamp="2026-01-23 00:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:18.041427909 +0000 UTC m=+1.339114597" watchObservedRunningTime="2026-01-23 00:58:18.057940458 +0000 UTC m=+1.355627152" Jan 23 00:58:18.074740 kubelet[2813]: I0123 00:58:18.074642 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" podStartSLOduration=4.074619078 podStartE2EDuration="4.074619078s" podCreationTimestamp="2026-01-23 00:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:18.059000463 +0000 UTC m=+1.356687162" 
watchObservedRunningTime="2026-01-23 00:58:18.074619078 +0000 UTC m=+1.372305781" Jan 23 00:58:21.022149 kubelet[2813]: I0123 00:58:21.022089 2813 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 00:58:21.023394 kubelet[2813]: I0123 00:58:21.022821 2813 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 00:58:21.023452 containerd[1536]: time="2026-01-23T00:58:21.022572508Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 00:58:21.929541 systemd[1]: Created slice kubepods-besteffort-podf7ab7392_6e54_47e6_a7e0_cb37d434b306.slice - libcontainer container kubepods-besteffort-podf7ab7392_6e54_47e6_a7e0_cb37d434b306.slice. Jan 23 00:58:22.019665 kubelet[2813]: I0123 00:58:22.019248 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7ab7392-6e54-47e6-a7e0-cb37d434b306-lib-modules\") pod \"kube-proxy-kr4s6\" (UID: \"f7ab7392-6e54-47e6-a7e0-cb37d434b306\") " pod="kube-system/kube-proxy-kr4s6" Jan 23 00:58:22.019665 kubelet[2813]: I0123 00:58:22.019504 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7k6n\" (UniqueName: \"kubernetes.io/projected/f7ab7392-6e54-47e6-a7e0-cb37d434b306-kube-api-access-c7k6n\") pod \"kube-proxy-kr4s6\" (UID: \"f7ab7392-6e54-47e6-a7e0-cb37d434b306\") " pod="kube-system/kube-proxy-kr4s6" Jan 23 00:58:22.019966 kubelet[2813]: I0123 00:58:22.019682 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7ab7392-6e54-47e6-a7e0-cb37d434b306-kube-proxy\") pod \"kube-proxy-kr4s6\" (UID: \"f7ab7392-6e54-47e6-a7e0-cb37d434b306\") " pod="kube-system/kube-proxy-kr4s6" Jan 23 00:58:22.019966 kubelet[2813]: I0123 00:58:22.019762 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7ab7392-6e54-47e6-a7e0-cb37d434b306-xtables-lock\") pod \"kube-proxy-kr4s6\" (UID: \"f7ab7392-6e54-47e6-a7e0-cb37d434b306\") " pod="kube-system/kube-proxy-kr4s6" Jan 23 00:58:22.177327 systemd[1]: Created slice kubepods-besteffort-pod027798ce_3d7e_4fa4_bc8c_9ccc8cdab143.slice - libcontainer container kubepods-besteffort-pod027798ce_3d7e_4fa4_bc8c_9ccc8cdab143.slice. 
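
The "Updating runtime config through cri with podcidr" / "Updating Pod CIDR" entries above hand 192.168.0.0/24 to the container runtime as this node's pod range. A small stdlib-Go check, purely illustrative, of what that CIDR gives the node:

// Parse the pod CIDR logged above and report its size.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// A /24 leaves 2^(32-24) = 256 addresses for pods on this node.
	fmt.Printf("pod CIDR %s: %d addresses\n", ipnet, 1<<uint(bits-ones))
}
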
Jan 23 00:58:22.222148 kubelet[2813]: I0123 00:58:22.221981 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/027798ce-3d7e-4fa4-bc8c-9ccc8cdab143-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-7jh8p\" (UID: \"027798ce-3d7e-4fa4-bc8c-9ccc8cdab143\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-7jh8p" Jan 23 00:58:22.222148 kubelet[2813]: I0123 00:58:22.222041 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcmc8\" (UniqueName: \"kubernetes.io/projected/027798ce-3d7e-4fa4-bc8c-9ccc8cdab143-kube-api-access-tcmc8\") pod \"tigera-operator-65cdcdfd6d-7jh8p\" (UID: \"027798ce-3d7e-4fa4-bc8c-9ccc8cdab143\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-7jh8p" Jan 23 00:58:22.244767 containerd[1536]: time="2026-01-23T00:58:22.244700315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kr4s6,Uid:f7ab7392-6e54-47e6-a7e0-cb37d434b306,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:22.272132 containerd[1536]: time="2026-01-23T00:58:22.272061389Z" level=info msg="connecting to shim a59c5623a2f994b7ea9b39a8044fc00bcd533973c12a14f6387216daa26ffc6d" address="unix:///run/containerd/s/c18d9605f60d13403317a6db0b2eab5f591f00fd4fd210c5b5d7b28b9c7a75a4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:22.307494 systemd[1]: Started cri-containerd-a59c5623a2f994b7ea9b39a8044fc00bcd533973c12a14f6387216daa26ffc6d.scope - libcontainer container a59c5623a2f994b7ea9b39a8044fc00bcd533973c12a14f6387216daa26ffc6d. Jan 23 00:58:22.361606 containerd[1536]: time="2026-01-23T00:58:22.361555376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kr4s6,Uid:f7ab7392-6e54-47e6-a7e0-cb37d434b306,Namespace:kube-system,Attempt:0,} returns sandbox id \"a59c5623a2f994b7ea9b39a8044fc00bcd533973c12a14f6387216daa26ffc6d\"" Jan 23 00:58:22.371145 containerd[1536]: time="2026-01-23T00:58:22.370463486Z" level=info msg="CreateContainer within sandbox \"a59c5623a2f994b7ea9b39a8044fc00bcd533973c12a14f6387216daa26ffc6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 00:58:22.393592 containerd[1536]: time="2026-01-23T00:58:22.393341795Z" level=info msg="Container 98e2e26b616f4ab0b8b157769a35494e08bd48021dc38ecc1b398db1ea34b3a3: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:22.405880 containerd[1536]: time="2026-01-23T00:58:22.405814985Z" level=info msg="CreateContainer within sandbox \"a59c5623a2f994b7ea9b39a8044fc00bcd533973c12a14f6387216daa26ffc6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"98e2e26b616f4ab0b8b157769a35494e08bd48021dc38ecc1b398db1ea34b3a3\"" Jan 23 00:58:22.407380 containerd[1536]: time="2026-01-23T00:58:22.407232408Z" level=info msg="StartContainer for \"98e2e26b616f4ab0b8b157769a35494e08bd48021dc38ecc1b398db1ea34b3a3\"" Jan 23 00:58:22.409814 containerd[1536]: time="2026-01-23T00:58:22.409702893Z" level=info msg="connecting to shim 98e2e26b616f4ab0b8b157769a35494e08bd48021dc38ecc1b398db1ea34b3a3" address="unix:///run/containerd/s/c18d9605f60d13403317a6db0b2eab5f591f00fd4fd210c5b5d7b28b9c7a75a4" protocol=ttrpc version=3 Jan 23 00:58:22.438463 systemd[1]: Started cri-containerd-98e2e26b616f4ab0b8b157769a35494e08bd48021dc38ecc1b398db1ea34b3a3.scope - libcontainer container 98e2e26b616f4ab0b8b157769a35494e08bd48021dc38ecc1b398db1ea34b3a3. 
Jan 23 00:58:22.487709 containerd[1536]: time="2026-01-23T00:58:22.487134790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-7jh8p,Uid:027798ce-3d7e-4fa4-bc8c-9ccc8cdab143,Namespace:tigera-operator,Attempt:0,}" Jan 23 00:58:22.517161 containerd[1536]: time="2026-01-23T00:58:22.516901670Z" level=info msg="connecting to shim 176e9d57779721bed7bd3fe15cc7d37d81d6afa3ebe475814afe954abefac689" address="unix:///run/containerd/s/85649ca28e98c7dda0ffce87087e8333acb9f67eb6de005d5a262890dc50723e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:22.565731 systemd[1]: Started cri-containerd-176e9d57779721bed7bd3fe15cc7d37d81d6afa3ebe475814afe954abefac689.scope - libcontainer container 176e9d57779721bed7bd3fe15cc7d37d81d6afa3ebe475814afe954abefac689. Jan 23 00:58:22.569252 containerd[1536]: time="2026-01-23T00:58:22.569183064Z" level=info msg="StartContainer for \"98e2e26b616f4ab0b8b157769a35494e08bd48021dc38ecc1b398db1ea34b3a3\" returns successfully" Jan 23 00:58:22.656533 containerd[1536]: time="2026-01-23T00:58:22.656431324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-7jh8p,Uid:027798ce-3d7e-4fa4-bc8c-9ccc8cdab143,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"176e9d57779721bed7bd3fe15cc7d37d81d6afa3ebe475814afe954abefac689\"" Jan 23 00:58:22.659749 containerd[1536]: time="2026-01-23T00:58:22.659709307Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 00:58:23.021004 kubelet[2813]: I0123 00:58:23.020757 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kr4s6" podStartSLOduration=2.02073124 podStartE2EDuration="2.02073124s" podCreationTimestamp="2026-01-23 00:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:23.020371102 +0000 UTC m=+6.318057803" watchObservedRunningTime="2026-01-23 00:58:23.02073124 +0000 UTC m=+6.318417942" Jan 23 00:58:23.156386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount37935841.mount: Deactivated successfully. Jan 23 00:58:24.189541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1039702893.mount: Deactivated successfully. 
Jan 23 00:58:25.124396 containerd[1536]: time="2026-01-23T00:58:25.124328708Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:25.125731 containerd[1536]: time="2026-01-23T00:58:25.125634092Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 00:58:25.127004 containerd[1536]: time="2026-01-23T00:58:25.126957267Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:25.131036 containerd[1536]: time="2026-01-23T00:58:25.129983894Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:25.131036 containerd[1536]: time="2026-01-23T00:58:25.130873752Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.470940792s" Jan 23 00:58:25.131036 containerd[1536]: time="2026-01-23T00:58:25.130925650Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 00:58:25.136947 containerd[1536]: time="2026-01-23T00:58:25.136784180Z" level=info msg="CreateContainer within sandbox \"176e9d57779721bed7bd3fe15cc7d37d81d6afa3ebe475814afe954abefac689\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 00:58:25.149305 containerd[1536]: time="2026-01-23T00:58:25.147438449Z" level=info msg="Container 38adb70301dcff419f0e7c9ae106e8f443e30118f356eae4abe701b18874cb4d: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:25.159875 containerd[1536]: time="2026-01-23T00:58:25.159815151Z" level=info msg="CreateContainer within sandbox \"176e9d57779721bed7bd3fe15cc7d37d81d6afa3ebe475814afe954abefac689\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"38adb70301dcff419f0e7c9ae106e8f443e30118f356eae4abe701b18874cb4d\"" Jan 23 00:58:25.160984 containerd[1536]: time="2026-01-23T00:58:25.160931701Z" level=info msg="StartContainer for \"38adb70301dcff419f0e7c9ae106e8f443e30118f356eae4abe701b18874cb4d\"" Jan 23 00:58:25.162576 containerd[1536]: time="2026-01-23T00:58:25.162540339Z" level=info msg="connecting to shim 38adb70301dcff419f0e7c9ae106e8f443e30118f356eae4abe701b18874cb4d" address="unix:///run/containerd/s/85649ca28e98c7dda0ffce87087e8333acb9f67eb6de005d5a262890dc50723e" protocol=ttrpc version=3 Jan 23 00:58:25.195466 systemd[1]: Started cri-containerd-38adb70301dcff419f0e7c9ae106e8f443e30118f356eae4abe701b18874cb4d.scope - libcontainer container 38adb70301dcff419f0e7c9ae106e8f443e30118f356eae4abe701b18874cb4d. 
Jan 23 00:58:25.241005 containerd[1536]: time="2026-01-23T00:58:25.240951036Z" level=info msg="StartContainer for \"38adb70301dcff419f0e7c9ae106e8f443e30118f356eae4abe701b18874cb4d\" returns successfully" Jan 23 00:58:26.020102 kubelet[2813]: I0123 00:58:26.019977 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-7jh8p" podStartSLOduration=1.547171249 podStartE2EDuration="4.019953495s" podCreationTimestamp="2026-01-23 00:58:22 +0000 UTC" firstStartedPulling="2026-01-23 00:58:22.659299403 +0000 UTC m=+5.956986094" lastFinishedPulling="2026-01-23 00:58:25.132081665 +0000 UTC m=+8.429768340" observedRunningTime="2026-01-23 00:58:26.019584864 +0000 UTC m=+9.317271565" watchObservedRunningTime="2026-01-23 00:58:26.019953495 +0000 UTC m=+9.317640192" Jan 23 00:58:32.874954 sudo[1877]: pam_unix(sudo:session): session closed for user root Jan 23 00:58:32.908495 sshd[1876]: Connection closed by 4.153.228.146 port 51990 Jan 23 00:58:32.911586 sshd-session[1873]: pam_unix(sshd:session): session closed for user core Jan 23 00:58:32.927570 systemd[1]: sshd@8-10.128.0.101:22-4.153.228.146:51990.service: Deactivated successfully. Jan 23 00:58:32.937018 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 00:58:32.937884 systemd[1]: session-9.scope: Consumed 7.145s CPU time, 235.2M memory peak. Jan 23 00:58:32.944533 systemd-logind[1512]: Session 9 logged out. Waiting for processes to exit. Jan 23 00:58:32.948936 systemd-logind[1512]: Removed session 9. Jan 23 00:58:40.877561 systemd[1]: Created slice kubepods-besteffort-pod9e053371_6345_4a36_9326_18a2b73193fc.slice - libcontainer container kubepods-besteffort-pod9e053371_6345_4a36_9326_18a2b73193fc.slice. Jan 23 00:58:40.948540 kubelet[2813]: I0123 00:58:40.948459 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e053371-6345-4a36-9326-18a2b73193fc-tigera-ca-bundle\") pod \"calico-typha-78f758cbfb-9pl5q\" (UID: \"9e053371-6345-4a36-9326-18a2b73193fc\") " pod="calico-system/calico-typha-78f758cbfb-9pl5q" Jan 23 00:58:40.948540 kubelet[2813]: I0123 00:58:40.948522 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9e053371-6345-4a36-9326-18a2b73193fc-typha-certs\") pod \"calico-typha-78f758cbfb-9pl5q\" (UID: \"9e053371-6345-4a36-9326-18a2b73193fc\") " pod="calico-system/calico-typha-78f758cbfb-9pl5q" Jan 23 00:58:40.949105 kubelet[2813]: I0123 00:58:40.948554 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6ldv\" (UniqueName: \"kubernetes.io/projected/9e053371-6345-4a36-9326-18a2b73193fc-kube-api-access-j6ldv\") pod \"calico-typha-78f758cbfb-9pl5q\" (UID: \"9e053371-6345-4a36-9326-18a2b73193fc\") " pod="calico-system/calico-typha-78f758cbfb-9pl5q" Jan 23 00:58:41.151840 systemd[1]: Created slice kubepods-besteffort-pod9cc3800e_243c_43c0_91ec_2ad76abc0835.slice - libcontainer container kubepods-besteffort-pod9cc3800e_243c_43c0_91ec_2ad76abc0835.slice. 
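
The pod-startup entries above expose the tigera-operator pull window twice: containerd reports the image was pulled "in 2.470940792s", and the latency tracker's firstStartedPulling/lastFinishedPulling fields bracket roughly the same interval (the two are measured at slightly different points, so they differ by a couple of milliseconds). Recomputing the difference from the logged timestamps — timestamps copied verbatim, the check itself is illustrative:

// Recompute lastFinishedPulling - firstStartedPulling from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the tracker's timestamp format in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	started, _ := time.Parse(layout, "2026-01-23 00:58:22.659299403 +0000 UTC")
	finished, _ := time.Parse(layout, "2026-01-23 00:58:25.132081665 +0000 UTC")
	fmt.Println("pull window:", finished.Sub(started)) // ≈ 2.472782262s
}
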
Jan 23 00:58:41.185885 containerd[1536]: time="2026-01-23T00:58:41.185829918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78f758cbfb-9pl5q,Uid:9e053371-6345-4a36-9326-18a2b73193fc,Namespace:calico-system,Attempt:0,}" Jan 23 00:58:41.213935 containerd[1536]: time="2026-01-23T00:58:41.213876345Z" level=info msg="connecting to shim b37d1a473ca548adbd3aac09c174d4c8d9fc426102b7de88bab428f806bfa22c" address="unix:///run/containerd/s/4c9e73fff32d5efc4219dddbf2cb8220c8059e14d89e9f6e148d9533b1d69d8c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:41.250866 kubelet[2813]: I0123 00:58:41.250648 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9cc3800e-243c-43c0-91ec-2ad76abc0835-node-certs\") pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.250866 kubelet[2813]: I0123 00:58:41.250701 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9cc3800e-243c-43c0-91ec-2ad76abc0835-cni-bin-dir\") pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.250866 kubelet[2813]: I0123 00:58:41.250732 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9cc3800e-243c-43c0-91ec-2ad76abc0835-flexvol-driver-host\") pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.250866 kubelet[2813]: I0123 00:58:41.250766 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cc3800e-243c-43c0-91ec-2ad76abc0835-tigera-ca-bundle\") pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.250866 kubelet[2813]: I0123 00:58:41.250792 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9cc3800e-243c-43c0-91ec-2ad76abc0835-var-run-calico\") pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.251367 kubelet[2813]: I0123 00:58:41.250815 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9cc3800e-243c-43c0-91ec-2ad76abc0835-var-lib-calico\") pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.251367 kubelet[2813]: I0123 00:58:41.250847 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9cc3800e-243c-43c0-91ec-2ad76abc0835-cni-net-dir\") pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.251367 kubelet[2813]: I0123 00:58:41.250872 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g669w\" (UniqueName: \"kubernetes.io/projected/9cc3800e-243c-43c0-91ec-2ad76abc0835-kube-api-access-g669w\") 
pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.251367 kubelet[2813]: I0123 00:58:41.250901 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9cc3800e-243c-43c0-91ec-2ad76abc0835-cni-log-dir\") pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.251367 kubelet[2813]: I0123 00:58:41.250923 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cc3800e-243c-43c0-91ec-2ad76abc0835-lib-modules\") pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.251784 kubelet[2813]: I0123 00:58:41.250961 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9cc3800e-243c-43c0-91ec-2ad76abc0835-policysync\") pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.251784 kubelet[2813]: I0123 00:58:41.250988 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cc3800e-243c-43c0-91ec-2ad76abc0835-xtables-lock\") pod \"calico-node-8p6pq\" (UID: \"9cc3800e-243c-43c0-91ec-2ad76abc0835\") " pod="calico-system/calico-node-8p6pq" Jan 23 00:58:41.259507 systemd[1]: Started cri-containerd-b37d1a473ca548adbd3aac09c174d4c8d9fc426102b7de88bab428f806bfa22c.scope - libcontainer container b37d1a473ca548adbd3aac09c174d4c8d9fc426102b7de88bab428f806bfa22c. Jan 23 00:58:41.359214 kubelet[2813]: E0123 00:58:41.359151 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.359214 kubelet[2813]: W0123 00:58:41.359183 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.359214 kubelet[2813]: E0123 00:58:41.359210 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.360605 kubelet[2813]: E0123 00:58:41.360573 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.360605 kubelet[2813]: W0123 00:58:41.360600 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.360801 kubelet[2813]: E0123 00:58:41.360621 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:41.362586 kubelet[2813]: E0123 00:58:41.362537 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.363060 kubelet[2813]: W0123 00:58:41.362716 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.363060 kubelet[2813]: E0123 00:58:41.362743 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.364201 kubelet[2813]: E0123 00:58:41.364096 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.364201 kubelet[2813]: W0123 00:58:41.364120 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.364201 kubelet[2813]: E0123 00:58:41.364137 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.365730 kubelet[2813]: E0123 00:58:41.365676 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.365730 kubelet[2813]: W0123 00:58:41.365699 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.365730 kubelet[2813]: E0123 00:58:41.365717 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.367835 containerd[1536]: time="2026-01-23T00:58:41.367675247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78f758cbfb-9pl5q,Uid:9e053371-6345-4a36-9326-18a2b73193fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"b37d1a473ca548adbd3aac09c174d4c8d9fc426102b7de88bab428f806bfa22c\"" Jan 23 00:58:41.368188 kubelet[2813]: E0123 00:58:41.368016 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.368188 kubelet[2813]: W0123 00:58:41.368040 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.368188 kubelet[2813]: E0123 00:58:41.368058 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:41.369116 kubelet[2813]: E0123 00:58:41.369012 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e" Jan 23 00:58:41.371637 kubelet[2813]: E0123 00:58:41.371609 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.371637 kubelet[2813]: W0123 00:58:41.371633 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.371795 kubelet[2813]: E0123 00:58:41.371653 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.377132 kubelet[2813]: E0123 00:58:41.377106 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.377132 kubelet[2813]: W0123 00:58:41.377130 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.378655 kubelet[2813]: E0123 00:58:41.377148 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.381299 kubelet[2813]: E0123 00:58:41.380543 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.381299 kubelet[2813]: W0123 00:58:41.380562 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.381299 kubelet[2813]: E0123 00:58:41.380589 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.381299 kubelet[2813]: E0123 00:58:41.380926 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.381299 kubelet[2813]: W0123 00:58:41.380941 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.381299 kubelet[2813]: E0123 00:58:41.380958 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:41.388384 kubelet[2813]: E0123 00:58:41.388356 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.388384 kubelet[2813]: W0123 00:58:41.388382 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.389220 kubelet[2813]: E0123 00:58:41.388402 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.389672 kubelet[2813]: E0123 00:58:41.389518 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.389672 kubelet[2813]: W0123 00:58:41.389537 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.389672 kubelet[2813]: E0123 00:58:41.389554 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.391377 kubelet[2813]: E0123 00:58:41.390422 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.391695 kubelet[2813]: W0123 00:58:41.391479 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.391695 kubelet[2813]: E0123 00:58:41.391503 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.391810 containerd[1536]: time="2026-01-23T00:58:41.391513215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 00:58:41.392081 kubelet[2813]: E0123 00:58:41.392025 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.392196 kubelet[2813]: W0123 00:58:41.392177 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.392369 kubelet[2813]: E0123 00:58:41.392302 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.394253 kubelet[2813]: E0123 00:58:41.393595 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.394253 kubelet[2813]: W0123 00:58:41.393614 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.394693 kubelet[2813]: E0123 00:58:41.394479 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:41.396297 kubelet[2813]: E0123 00:58:41.394902 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.396437 kubelet[2813]: W0123 00:58:41.396416 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.396593 kubelet[2813]: E0123 00:58:41.396575 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.398198 kubelet[2813]: E0123 00:58:41.396975 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.398198 kubelet[2813]: W0123 00:58:41.396992 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.398198 kubelet[2813]: E0123 00:58:41.397007 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.400207 kubelet[2813]: E0123 00:58:41.400184 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.400402 kubelet[2813]: W0123 00:58:41.400384 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.400737 kubelet[2813]: E0123 00:58:41.400670 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.405359 kubelet[2813]: E0123 00:58:41.403664 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.405512 kubelet[2813]: W0123 00:58:41.405482 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.405641 kubelet[2813]: E0123 00:58:41.405623 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.406839 kubelet[2813]: E0123 00:58:41.406820 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.406970 kubelet[2813]: W0123 00:58:41.406945 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.407071 kubelet[2813]: E0123 00:58:41.407056 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:41.407640 kubelet[2813]: E0123 00:58:41.407482 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.407640 kubelet[2813]: W0123 00:58:41.407508 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.407640 kubelet[2813]: E0123 00:58:41.407525 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.408186 kubelet[2813]: E0123 00:58:41.408167 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.408334 kubelet[2813]: W0123 00:58:41.408313 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.408452 kubelet[2813]: E0123 00:58:41.408432 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.409075 kubelet[2813]: E0123 00:58:41.409007 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.409075 kubelet[2813]: W0123 00:58:41.409026 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.409419 kubelet[2813]: E0123 00:58:41.409043 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.410734 kubelet[2813]: E0123 00:58:41.410715 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.410891 kubelet[2813]: W0123 00:58:41.410851 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.411070 kubelet[2813]: E0123 00:58:41.410974 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.411749 kubelet[2813]: E0123 00:58:41.411731 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.411960 kubelet[2813]: W0123 00:58:41.411881 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.411960 kubelet[2813]: E0123 00:58:41.411909 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:41.412830 kubelet[2813]: E0123 00:58:41.412803 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.412962 kubelet[2813]: W0123 00:58:41.412942 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.413154 kubelet[2813]: E0123 00:58:41.413034 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.414331 kubelet[2813]: E0123 00:58:41.413699 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.414331 kubelet[2813]: W0123 00:58:41.413717 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.414331 kubelet[2813]: E0123 00:58:41.413735 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.415662 kubelet[2813]: E0123 00:58:41.415635 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.415778 kubelet[2813]: W0123 00:58:41.415763 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.415861 kubelet[2813]: E0123 00:58:41.415847 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.416653 kubelet[2813]: E0123 00:58:41.416633 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.416773 kubelet[2813]: W0123 00:58:41.416757 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.416877 kubelet[2813]: E0123 00:58:41.416861 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.417345 kubelet[2813]: E0123 00:58:41.417325 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.417484 kubelet[2813]: W0123 00:58:41.417455 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.417596 kubelet[2813]: E0123 00:58:41.417578 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:41.438311 kubelet[2813]: E0123 00:58:41.438151 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.438311 kubelet[2813]: W0123 00:58:41.438176 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.438311 kubelet[2813]: E0123 00:58:41.438198 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.438949 kubelet[2813]: E0123 00:58:41.438876 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.438949 kubelet[2813]: W0123 00:58:41.438895 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.438949 kubelet[2813]: E0123 00:58:41.438913 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.439529 kubelet[2813]: E0123 00:58:41.439512 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.439709 kubelet[2813]: W0123 00:58:41.439688 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.439915 kubelet[2813]: E0123 00:58:41.439783 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.440519 kubelet[2813]: E0123 00:58:41.440491 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.440734 kubelet[2813]: W0123 00:58:41.440611 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.440734 kubelet[2813]: E0123 00:58:41.440632 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.441441 kubelet[2813]: E0123 00:58:41.441422 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.441552 kubelet[2813]: W0123 00:58:41.441539 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.441615 kubelet[2813]: E0123 00:58:41.441605 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:41.442018 kubelet[2813]: E0123 00:58:41.442000 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.442144 kubelet[2813]: W0123 00:58:41.442083 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.442144 kubelet[2813]: E0123 00:58:41.442099 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.442540 kubelet[2813]: E0123 00:58:41.442525 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.442764 kubelet[2813]: W0123 00:58:41.442673 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.442764 kubelet[2813]: E0123 00:58:41.442699 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.443336 kubelet[2813]: E0123 00:58:41.443178 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.443336 kubelet[2813]: W0123 00:58:41.443197 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.443336 kubelet[2813]: E0123 00:58:41.443231 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.444432 kubelet[2813]: E0123 00:58:41.444410 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.444975 kubelet[2813]: W0123 00:58:41.444934 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.445388 kubelet[2813]: E0123 00:58:41.445078 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.445949 kubelet[2813]: E0123 00:58:41.445910 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.446535 kubelet[2813]: W0123 00:58:41.446191 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.446535 kubelet[2813]: E0123 00:58:41.446217 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:41.447167 kubelet[2813]: E0123 00:58:41.447099 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.447167 kubelet[2813]: W0123 00:58:41.447118 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.447644 kubelet[2813]: E0123 00:58:41.447135 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.448114 kubelet[2813]: E0123 00:58:41.447988 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.448114 kubelet[2813]: W0123 00:58:41.448006 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.448114 kubelet[2813]: E0123 00:58:41.448033 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.449490 kubelet[2813]: E0123 00:58:41.448794 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.449490 kubelet[2813]: W0123 00:58:41.449316 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.449490 kubelet[2813]: E0123 00:58:41.449335 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.450294 kubelet[2813]: E0123 00:58:41.450104 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.450294 kubelet[2813]: W0123 00:58:41.450118 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.450294 kubelet[2813]: E0123 00:58:41.450131 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.451421 kubelet[2813]: E0123 00:58:41.451400 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.451706 kubelet[2813]: W0123 00:58:41.451557 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.451706 kubelet[2813]: E0123 00:58:41.451584 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:41.452300 kubelet[2813]: E0123 00:58:41.452023 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.452300 kubelet[2813]: W0123 00:58:41.452060 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.452300 kubelet[2813]: E0123 00:58:41.452077 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.454124 kubelet[2813]: E0123 00:58:41.453761 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.454124 kubelet[2813]: W0123 00:58:41.453783 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.454124 kubelet[2813]: E0123 00:58:41.453800 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.455288 kubelet[2813]: E0123 00:58:41.455234 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.456242 kubelet[2813]: W0123 00:58:41.455896 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.456242 kubelet[2813]: E0123 00:58:41.455926 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.457069 kubelet[2813]: E0123 00:58:41.457046 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.457733 kubelet[2813]: W0123 00:58:41.457389 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.457733 kubelet[2813]: E0123 00:58:41.457417 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:41.458846 kubelet[2813]: E0123 00:58:41.458603 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:41.458846 kubelet[2813]: W0123 00:58:41.458622 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:41.458846 kubelet[2813]: E0123 00:58:41.458638 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
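[Aside, not part of the journal: the kubelet probes each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ by executing the driver binary with `init` and unmarshalling its stdout as JSON. Here the nodeagent~uds/uds executable does not exist yet, stdout is empty, and the unmarshal fails with "unexpected end of JSON input"; the triplet recurs on every probe cycle until something installs the binary — presumably Calico's pod2daemon-flexvol image, which this same log pulls a few seconds later. A minimal illustrative sketch of the `init` contract, in Python as a stand-in for what is usually a shell script or small Go binary:]

    #!/usr/bin/env python3
    # Illustrative stand-in for a FlexVolume driver's "init" handler (this is
    # NOT the real nodeagent~uds driver). kubelet runs `<driver> init` and
    # expects a JSON status object on stdout; an empty stdout is exactly what
    # produces the "unexpected end of JSON input" error logged above.
    import json
    import sys

    command = sys.argv[1] if len(sys.argv) > 1 else ""
    if command == "init":
        # Report success; attach=False tells kubelet this driver does not
        # implement attach/detach calls.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported"}))
    sys.exit(0)

[Until an executable answering `init` this way exists at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, every plugin-probe cycle logs the triplet above.]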
Jan 23 00:58:41.462887 kubelet[2813]: I0123 00:58:41.462355 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e-registration-dir\") pod \"csi-node-driver-cvsx8\" (UID: \"0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e\") " pod="calico-system/csi-node-driver-cvsx8"
Jan 23 00:58:41.463173 kubelet[2813]: I0123 00:58:41.463064 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e-varrun\") pod \"csi-node-driver-cvsx8\" (UID: \"0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e\") " pod="calico-system/csi-node-driver-cvsx8"
Jan 23 00:58:41.465046 containerd[1536]: time="2026-01-23T00:58:41.464326910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8p6pq,Uid:9cc3800e-243c-43c0-91ec-2ad76abc0835,Namespace:calico-system,Attempt:0,}"
Jan 23 00:58:41.466850 kubelet[2813]: I0123 00:58:41.466824 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e-socket-dir\") pod \"csi-node-driver-cvsx8\" (UID: \"0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e\") " pod="calico-system/csi-node-driver-cvsx8"
Jan 23 00:58:41.469580 kubelet[2813]: I0123 00:58:41.469472 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb5jl\" (UniqueName: \"kubernetes.io/projected/0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e-kube-api-access-pb5jl\") pod \"csi-node-driver-cvsx8\" (UID: \"0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e\") " pod="calico-system/csi-node-driver-cvsx8"
Jan 23 00:58:41.473294 kubelet[2813]: I0123 00:58:41.473117 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e-kubelet-dir\") pod \"csi-node-driver-cvsx8\" (UID: \"0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e\") " pod="calico-system/csi-node-driver-cvsx8"
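[Aside, not part of the journal: the I-lines above are the kubelet's volume reconciler verifying the csi-node-driver pod's volumes — four hostPath mounts (registration-dir, varrun, socket-dir, kubelet-dir) plus one projected service-account token (kube-api-access-pb5jl). A hedged sketch of confirming that layout from the API with the official Kubernetes Python client; assumes cluster read access, and the pod and namespace names are taken from the entries above:]

    # Sketch: list the volumes behind the VerifyControllerAttachedVolume entries.
    # Requires the `kubernetes` Python client and a working kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()
    pod = client.CoreV1Api().read_namespaced_pod(
        name="csi-node-driver-cvsx8", namespace="calico-system"
    )
    for volume in pod.spec.volumes:
        if volume.host_path:
            print(f"{volume.name}: hostPath {volume.host_path.path}")
        elif volume.projected:
            print(f"{volume.name}: projected service-account token")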
Jan 23 00:58:41.505982 containerd[1536]: time="2026-01-23T00:58:41.505469184Z" level=info msg="connecting to shim 2cbca08f2141067171f3244e1fb5dab597a60cc2dd5e08f10e29d74625cd0c0d" address="unix:///run/containerd/s/419ade9c8d34e7dca07d01200a05439a2bb52205bc8530b7c88dc0ade158f504" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:58:41.562507 systemd[1]: Started cri-containerd-2cbca08f2141067171f3244e1fb5dab597a60cc2dd5e08f10e29d74625cd0c0d.scope - libcontainer container 2cbca08f2141067171f3244e1fb5dab597a60cc2dd5e08f10e29d74625cd0c0d.
Jan 23 00:58:41.625694 containerd[1536]: time="2026-01-23T00:58:41.625637501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8p6pq,Uid:9cc3800e-243c-43c0-91ec-2ad76abc0835,Namespace:calico-system,Attempt:0,} returns sandbox id \"2cbca08f2141067171f3244e1fb5dab597a60cc2dd5e08f10e29d74625cd0c0d\""
Jan 23 00:58:42.427150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823400004.mount: Deactivated successfully.
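[Aside, not part of the journal: the sequence above is the CRI sandbox lifecycle — RunPodSandbox, containerd connecting to its shim over a ttrpc unix socket, systemd starting a transient .scope unit for the libcontainer cgroup, and the sandbox id being returned to the kubelet. A sketch for cross-checking that sandbox from the node; assumes crictl is installed and configured for this containerd's CRI endpoint (e.g. unix:///run/containerd/containerd.sock):]

    # Sketch: inspect the pod sandbox whose id is returned above.
    import json
    import subprocess

    SANDBOX_ID = "2cbca08f2141067171f3244e1fb5dab597a60cc2dd5e08f10e29d74625cd0c0d"

    raw = subprocess.run(
        ["crictl", "inspectp", SANDBOX_ID],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(raw)["status"]
    # Expect something like: calico-node-8p6pq SANDBOX_READY
    print(status["metadata"]["name"], status["state"])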
Jan 23 00:58:42.914419 kubelet[2813]: E0123 00:58:42.914338 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e"
Jan 23 00:58:44.150078 containerd[1536]: time="2026-01-23T00:58:44.150004210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:58:44.151644 containerd[1536]: time="2026-01-23T00:58:44.151575574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 23 00:58:44.153061 containerd[1536]: time="2026-01-23T00:58:44.152907666Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:58:44.155569 containerd[1536]: time="2026-01-23T00:58:44.155508209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:58:44.156643 containerd[1536]: time="2026-01-23T00:58:44.156305871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.764748068s"
Jan 23 00:58:44.156643 containerd[1536]: time="2026-01-23T00:58:44.156352832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 23 00:58:44.159254 containerd[1536]: time="2026-01-23T00:58:44.158022754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 00:58:44.188206 containerd[1536]: time="2026-01-23T00:58:44.188161245Z" level=info msg="CreateContainer within sandbox \"b37d1a473ca548adbd3aac09c174d4c8d9fc426102b7de88bab428f806bfa22c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 00:58:44.199507 containerd[1536]: time="2026-01-23T00:58:44.199452847Z" level=info msg="Container 94e57ade4a67699af3952145c541aae706ac3e9eb3caad08ee668d59e6a613d4: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:58:44.212373 containerd[1536]: time="2026-01-23T00:58:44.212313127Z" level=info msg="CreateContainer within sandbox \"b37d1a473ca548adbd3aac09c174d4c8d9fc426102b7de88bab428f806bfa22c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"94e57ade4a67699af3952145c541aae706ac3e9eb3caad08ee668d59e6a613d4\""
Jan 23 00:58:44.214296 containerd[1536]: time="2026-01-23T00:58:44.213350096Z" level=info msg="StartContainer for \"94e57ade4a67699af3952145c541aae706ac3e9eb3caad08ee668d59e6a613d4\""
Jan 23 00:58:44.215606 containerd[1536]: time="2026-01-23T00:58:44.215570803Z" level=info msg="connecting to shim 94e57ade4a67699af3952145c541aae706ac3e9eb3caad08ee668d59e6a613d4" address="unix:///run/containerd/s/4c9e73fff32d5efc4219dddbf2cb8220c8059e14d89e9f6e148d9533b1d69d8c" protocol=ttrpc version=3
Jan 23 00:58:44.256466 systemd[1]: Started cri-containerd-94e57ade4a67699af3952145c541aae706ac3e9eb3caad08ee668d59e6a613d4.scope - libcontainer container 94e57ade4a67699af3952145c541aae706ac3e9eb3caad08ee668d59e6a613d4.
Jan 23 00:58:44.341429 containerd[1536]: time="2026-01-23T00:58:44.341370637Z" level=info msg="StartContainer for \"94e57ade4a67699af3952145c541aae706ac3e9eb3caad08ee668d59e6a613d4\" returns successfully"
Jan 23 00:58:44.914158 kubelet[2813]: E0123 00:58:44.914086 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e"
Jan 23 00:58:45.109031 kubelet[2813]: I0123 00:58:45.108211 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78f758cbfb-9pl5q" podStartSLOduration=2.340639489 podStartE2EDuration="5.108185725s" podCreationTimestamp="2026-01-23 00:58:40 +0000 UTC" firstStartedPulling="2026-01-23 00:58:41.390088238 +0000 UTC m=+24.687774917" lastFinishedPulling="2026-01-23 00:58:44.157634459 +0000 UTC m=+27.455321153" observedRunningTime="2026-01-23 00:58:45.10800868 +0000 UTC m=+28.405695408" watchObservedRunningTime="2026-01-23 00:58:45.108185725 +0000 UTC m=+28.405872426"
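[Aside, not part of the journal: the pod_startup_latency_tracker entry is internally consistent — podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window. A quick check with values copied from the line above; the last digits differ only by rounding of the kubelet's internal clock readings:]

    # Seconds past 00:58:40 UTC, copied from the log entry above.
    first_started_pulling = 1.390088238   # firstStartedPulling 00:58:41.390088238
    last_finished_pulling = 4.157634459   # lastFinishedPulling 00:58:44.157634459
    observed_running      = 5.108185725   # observedRunningTime 00:58:45.108185725
    pod_created           = 0.0           # podCreationTimestamp 00:58:40

    e2e = observed_running - pod_created                         # 5.108185725s, matches podStartE2EDuration
    slo = e2e - (last_finished_pulling - first_started_pulling)  # ~2.340639504s vs logged 2.340639489
    print(f"podStartE2EDuration={e2e:.9f}s podStartSLOduration={slo:.9f}s")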
Error: unexpected end of JSON input" Jan 23 00:58:45.191551 kubelet[2813]: E0123 00:58:45.191449 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.191551 kubelet[2813]: W0123 00:58:45.191478 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.191551 kubelet[2813]: E0123 00:58:45.191498 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.192284 kubelet[2813]: E0123 00:58:45.192236 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.192507 kubelet[2813]: W0123 00:58:45.192254 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.192507 kubelet[2813]: E0123 00:58:45.192428 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.193036 kubelet[2813]: E0123 00:58:45.192989 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.193036 kubelet[2813]: W0123 00:58:45.193008 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.193387 kubelet[2813]: E0123 00:58:45.193025 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.193837 kubelet[2813]: E0123 00:58:45.193821 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.194034 kubelet[2813]: W0123 00:58:45.193947 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.194034 kubelet[2813]: E0123 00:58:45.193988 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.194695 kubelet[2813]: E0123 00:58:45.194549 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.194695 kubelet[2813]: W0123 00:58:45.194566 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.194695 kubelet[2813]: E0123 00:58:45.194582 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:45.195209 kubelet[2813]: E0123 00:58:45.195100 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.195209 kubelet[2813]: W0123 00:58:45.195117 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.195209 kubelet[2813]: E0123 00:58:45.195134 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.195747 kubelet[2813]: E0123 00:58:45.195657 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.195747 kubelet[2813]: W0123 00:58:45.195673 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.195747 kubelet[2813]: E0123 00:58:45.195688 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.196315 kubelet[2813]: E0123 00:58:45.196250 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.196516 kubelet[2813]: W0123 00:58:45.196428 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.196516 kubelet[2813]: E0123 00:58:45.196451 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.196976 kubelet[2813]: E0123 00:58:45.196923 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.196976 kubelet[2813]: W0123 00:58:45.196940 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.197425 kubelet[2813]: E0123 00:58:45.196957 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.197868 kubelet[2813]: E0123 00:58:45.197830 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.198062 kubelet[2813]: W0123 00:58:45.197968 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.198062 kubelet[2813]: E0123 00:58:45.197995 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:45.198696 kubelet[2813]: E0123 00:58:45.198562 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.198696 kubelet[2813]: W0123 00:58:45.198579 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.198696 kubelet[2813]: E0123 00:58:45.198594 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.199180 kubelet[2813]: E0123 00:58:45.199119 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.199180 kubelet[2813]: W0123 00:58:45.199137 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.199455 kubelet[2813]: E0123 00:58:45.199372 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.225018 kubelet[2813]: E0123 00:58:45.224909 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.225018 kubelet[2813]: W0123 00:58:45.224943 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.225018 kubelet[2813]: E0123 00:58:45.224971 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.226284 kubelet[2813]: E0123 00:58:45.226238 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.226284 kubelet[2813]: W0123 00:58:45.226277 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.226483 kubelet[2813]: E0123 00:58:45.226299 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.226834 kubelet[2813]: E0123 00:58:45.226812 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.226930 kubelet[2813]: W0123 00:58:45.226834 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.226930 kubelet[2813]: E0123 00:58:45.226853 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:45.227770 kubelet[2813]: E0123 00:58:45.227601 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.227770 kubelet[2813]: W0123 00:58:45.227623 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.227770 kubelet[2813]: E0123 00:58:45.227647 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.228548 kubelet[2813]: E0123 00:58:45.228480 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.228548 kubelet[2813]: W0123 00:58:45.228549 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.229168 kubelet[2813]: E0123 00:58:45.228567 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.229342 kubelet[2813]: E0123 00:58:45.229322 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.229425 kubelet[2813]: W0123 00:58:45.229344 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.229425 kubelet[2813]: E0123 00:58:45.229360 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.229781 kubelet[2813]: E0123 00:58:45.229758 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.229864 kubelet[2813]: W0123 00:58:45.229805 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.229864 kubelet[2813]: E0123 00:58:45.229823 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.230485 kubelet[2813]: E0123 00:58:45.230424 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.230485 kubelet[2813]: W0123 00:58:45.230441 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.230485 kubelet[2813]: E0123 00:58:45.230457 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:45.231028 kubelet[2813]: E0123 00:58:45.231006 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.231028 kubelet[2813]: W0123 00:58:45.231027 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.231171 kubelet[2813]: E0123 00:58:45.231045 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.232308 kubelet[2813]: E0123 00:58:45.231717 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.232308 kubelet[2813]: W0123 00:58:45.231736 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.232308 kubelet[2813]: E0123 00:58:45.231753 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.232308 kubelet[2813]: E0123 00:58:45.232089 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.232308 kubelet[2813]: W0123 00:58:45.232102 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.232308 kubelet[2813]: E0123 00:58:45.232116 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.232749 kubelet[2813]: E0123 00:58:45.232714 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.232749 kubelet[2813]: W0123 00:58:45.232736 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.232885 kubelet[2813]: E0123 00:58:45.232752 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.233373 kubelet[2813]: E0123 00:58:45.233350 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.233373 kubelet[2813]: W0123 00:58:45.233372 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.233523 kubelet[2813]: E0123 00:58:45.233389 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:45.234585 kubelet[2813]: E0123 00:58:45.234565 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.234816 kubelet[2813]: W0123 00:58:45.234681 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.234816 kubelet[2813]: E0123 00:58:45.234704 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.235792 kubelet[2813]: E0123 00:58:45.235744 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.235792 kubelet[2813]: W0123 00:58:45.235763 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.236826 kubelet[2813]: E0123 00:58:45.236592 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.237422 kubelet[2813]: E0123 00:58:45.237396 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.238247 kubelet[2813]: W0123 00:58:45.237905 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.238247 kubelet[2813]: E0123 00:58:45.237933 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.238549 kubelet[2813]: E0123 00:58:45.238531 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.238682 kubelet[2813]: W0123 00:58:45.238664 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.238878 kubelet[2813]: E0123 00:58:45.238769 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:58:45.239460 kubelet[2813]: E0123 00:58:45.239443 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:58:45.239716 kubelet[2813]: W0123 00:58:45.239578 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:58:45.239716 kubelet[2813]: E0123 00:58:45.239600 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:58:45.283292 containerd[1536]: time="2026-01-23T00:58:45.283222493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:45.286289 containerd[1536]: time="2026-01-23T00:58:45.285574039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 23 00:58:45.286429 containerd[1536]: time="2026-01-23T00:58:45.286405918Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:45.290514 containerd[1536]: time="2026-01-23T00:58:45.290456201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:45.291940 containerd[1536]: time="2026-01-23T00:58:45.291691466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.133630901s" Jan 23 00:58:45.291940 containerd[1536]: time="2026-01-23T00:58:45.291809544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 00:58:45.299489 containerd[1536]: time="2026-01-23T00:58:45.298809204Z" level=info msg="CreateContainer within sandbox \"2cbca08f2141067171f3244e1fb5dab597a60cc2dd5e08f10e29d74625cd0c0d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 00:58:45.320419 containerd[1536]: time="2026-01-23T00:58:45.320350102Z" level=info msg="Container 1995669eca0867ffc94a49745a906a43b6e8758ca2bf99f9fe08cf5b9e259989: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:45.339080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047481172.mount: Deactivated successfully. Jan 23 00:58:45.347427 containerd[1536]: time="2026-01-23T00:58:45.347372790Z" level=info msg="CreateContainer within sandbox \"2cbca08f2141067171f3244e1fb5dab597a60cc2dd5e08f10e29d74625cd0c0d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1995669eca0867ffc94a49745a906a43b6e8758ca2bf99f9fe08cf5b9e259989\"" Jan 23 00:58:45.349320 containerd[1536]: time="2026-01-23T00:58:45.349257095Z" level=info msg="StartContainer for \"1995669eca0867ffc94a49745a906a43b6e8758ca2bf99f9fe08cf5b9e259989\"" Jan 23 00:58:45.353702 containerd[1536]: time="2026-01-23T00:58:45.353666095Z" level=info msg="connecting to shim 1995669eca0867ffc94a49745a906a43b6e8758ca2bf99f9fe08cf5b9e259989" address="unix:///run/containerd/s/419ade9c8d34e7dca07d01200a05439a2bb52205bc8530b7c88dc0ade158f504" protocol=ttrpc version=3 Jan 23 00:58:45.389482 systemd[1]: Started cri-containerd-1995669eca0867ffc94a49745a906a43b6e8758ca2bf99f9fe08cf5b9e259989.scope - libcontainer container 1995669eca0867ffc94a49745a906a43b6e8758ca2bf99f9fe08cf5b9e259989. 
Jan 23 00:58:45.482596 containerd[1536]: time="2026-01-23T00:58:45.480115499Z" level=info msg="StartContainer for \"1995669eca0867ffc94a49745a906a43b6e8758ca2bf99f9fe08cf5b9e259989\" returns successfully" Jan 23 00:58:45.503419 systemd[1]: cri-containerd-1995669eca0867ffc94a49745a906a43b6e8758ca2bf99f9fe08cf5b9e259989.scope: Deactivated successfully. Jan 23 00:58:45.509206 containerd[1536]: time="2026-01-23T00:58:45.509158585Z" level=info msg="received container exit event container_id:\"1995669eca0867ffc94a49745a906a43b6e8758ca2bf99f9fe08cf5b9e259989\" id:\"1995669eca0867ffc94a49745a906a43b6e8758ca2bf99f9fe08cf5b9e259989\" pid:3541 exited_at:{seconds:1769129925 nanos:508424929}" Jan 23 00:58:45.544998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1995669eca0867ffc94a49745a906a43b6e8758ca2bf99f9fe08cf5b9e259989-rootfs.mount: Deactivated successfully. Jan 23 00:58:46.914066 kubelet[2813]: E0123 00:58:46.913553 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e" Jan 23 00:58:47.100792 containerd[1536]: time="2026-01-23T00:58:47.100546828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 00:58:48.914297 kubelet[2813]: E0123 00:58:48.913500 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e" Jan 23 00:58:50.454217 containerd[1536]: time="2026-01-23T00:58:50.454137621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:50.456384 containerd[1536]: time="2026-01-23T00:58:50.456110061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 00:58:50.459431 containerd[1536]: time="2026-01-23T00:58:50.459389371Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:50.465617 containerd[1536]: time="2026-01-23T00:58:50.465580068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:50.468706 containerd[1536]: time="2026-01-23T00:58:50.468201658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.367603903s" Jan 23 00:58:50.468706 containerd[1536]: time="2026-01-23T00:58:50.468251535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 00:58:50.477133 containerd[1536]: time="2026-01-23T00:58:50.477072406Z" level=info msg="CreateContainer within sandbox 
\"2cbca08f2141067171f3244e1fb5dab597a60cc2dd5e08f10e29d74625cd0c0d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 00:58:50.494500 containerd[1536]: time="2026-01-23T00:58:50.494445553Z" level=info msg="Container 2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:50.508068 containerd[1536]: time="2026-01-23T00:58:50.508004029Z" level=info msg="CreateContainer within sandbox \"2cbca08f2141067171f3244e1fb5dab597a60cc2dd5e08f10e29d74625cd0c0d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a\"" Jan 23 00:58:50.508879 containerd[1536]: time="2026-01-23T00:58:50.508821309Z" level=info msg="StartContainer for \"2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a\"" Jan 23 00:58:50.511403 containerd[1536]: time="2026-01-23T00:58:50.511309556Z" level=info msg="connecting to shim 2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a" address="unix:///run/containerd/s/419ade9c8d34e7dca07d01200a05439a2bb52205bc8530b7c88dc0ade158f504" protocol=ttrpc version=3 Jan 23 00:58:50.546490 systemd[1]: Started cri-containerd-2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a.scope - libcontainer container 2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a. Jan 23 00:58:50.647544 containerd[1536]: time="2026-01-23T00:58:50.647489635Z" level=info msg="StartContainer for \"2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a\" returns successfully" Jan 23 00:58:50.914308 kubelet[2813]: E0123 00:58:50.913305 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e" Jan 23 00:58:51.697522 containerd[1536]: time="2026-01-23T00:58:51.697429239Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:58:51.705117 systemd[1]: cri-containerd-2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a.scope: Deactivated successfully. Jan 23 00:58:51.706118 systemd[1]: cri-containerd-2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a.scope: Consumed 657ms CPU time, 193.6M memory peak, 171.3M written to disk. Jan 23 00:58:51.708153 containerd[1536]: time="2026-01-23T00:58:51.707659891Z" level=info msg="received container exit event container_id:\"2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a\" id:\"2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a\" pid:3600 exited_at:{seconds:1769129931 nanos:706859018}" Jan 23 00:58:51.742775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cae0c6d4b0bbc377ca564c3af9813b98cf8bfb1f8744235542e120cf7c7f38a-rootfs.mount: Deactivated successfully. Jan 23 00:58:51.784194 kubelet[2813]: I0123 00:58:51.784154 2813 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 00:58:51.902831 systemd[1]: Created slice kubepods-besteffort-pod31c15f9f_ad6e_46e0_8a36_0d164e7c4eed.slice - libcontainer container kubepods-besteffort-pod31c15f9f_ad6e_46e0_8a36_0d164e7c4eed.slice. 
Jan 23 00:58:51.921514 systemd[1]: Created slice kubepods-besteffort-pod68595a98_b8d6_439d_8c23_9d5c1a8e3d45.slice - libcontainer container kubepods-besteffort-pod68595a98_b8d6_439d_8c23_9d5c1a8e3d45.slice. Jan 23 00:58:51.992699 kubelet[2813]: I0123 00:58:51.992083 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/68595a98-b8d6-439d-8c23-9d5c1a8e3d45-calico-apiserver-certs\") pod \"calico-apiserver-8fc7f6fd7-dhdhc\" (UID: \"68595a98-b8d6-439d-8c23-9d5c1a8e3d45\") " pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" Jan 23 00:58:51.992699 kubelet[2813]: I0123 00:58:51.992160 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmnln\" (UniqueName: \"kubernetes.io/projected/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-kube-api-access-rmnln\") pod \"whisker-8d6bc8dd4-bt4gl\" (UID: \"31c15f9f-ad6e-46e0-8a36-0d164e7c4eed\") " pod="calico-system/whisker-8d6bc8dd4-bt4gl" Jan 23 00:58:51.992699 kubelet[2813]: I0123 00:58:51.992191 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-whisker-ca-bundle\") pod \"whisker-8d6bc8dd4-bt4gl\" (UID: \"31c15f9f-ad6e-46e0-8a36-0d164e7c4eed\") " pod="calico-system/whisker-8d6bc8dd4-bt4gl" Jan 23 00:58:51.992699 kubelet[2813]: I0123 00:58:51.992224 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-whisker-backend-key-pair\") pod \"whisker-8d6bc8dd4-bt4gl\" (UID: \"31c15f9f-ad6e-46e0-8a36-0d164e7c4eed\") " pod="calico-system/whisker-8d6bc8dd4-bt4gl" Jan 23 00:58:51.992699 kubelet[2813]: I0123 00:58:51.992285 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnxjp\" (UniqueName: \"kubernetes.io/projected/68595a98-b8d6-439d-8c23-9d5c1a8e3d45-kube-api-access-jnxjp\") pod \"calico-apiserver-8fc7f6fd7-dhdhc\" (UID: \"68595a98-b8d6-439d-8c23-9d5c1a8e3d45\") " pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" Jan 23 00:58:52.179997 systemd[1]: Created slice kubepods-burstable-pod8d4a98f1_53d3_4f88_92ce_8fea82a35989.slice - libcontainer container kubepods-burstable-pod8d4a98f1_53d3_4f88_92ce_8fea82a35989.slice. 
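
The kubepods slice names that systemd keeps creating here are derived mechanically from each pod: its QoS class plus its UID with every dash rewritten to an underscore, so the whisker pod with UID 31c15f9f-ad6e-46e0-8a36-0d164e7c4eed lands in kubepods-besteffort-pod31c15f9f_ad6e_46e0_8a36_0d164e7c4eed.slice. A small sketch of that mapping, assuming the systemd cgroup driver; not kubelet's actual code:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice builds the slice name for a pod: QoS class, then the pod
    // UID with "-" replaced by "_" ("-" is a separator in systemd units).
    func podSlice(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID of the whisker pod from the surrounding log entries.
        fmt.Println(podSlice("besteffort", "31c15f9f-ad6e-46e0-8a36-0d164e7c4eed"))
        // Output: kubepods-besteffort-pod31c15f9f_ad6e_46e0_8a36_0d164e7c4eed.slice
    }
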
Jan 23 00:58:52.294427 kubelet[2813]: I0123 00:58:52.294358 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk65m\" (UniqueName: \"kubernetes.io/projected/8d4a98f1-53d3-4f88-92ce-8fea82a35989-kube-api-access-pk65m\") pod \"coredns-66bc5c9577-5mh2p\" (UID: \"8d4a98f1-53d3-4f88-92ce-8fea82a35989\") " pod="kube-system/coredns-66bc5c9577-5mh2p" Jan 23 00:58:52.294639 kubelet[2813]: I0123 00:58:52.294447 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d4a98f1-53d3-4f88-92ce-8fea82a35989-config-volume\") pod \"coredns-66bc5c9577-5mh2p\" (UID: \"8d4a98f1-53d3-4f88-92ce-8fea82a35989\") " pod="kube-system/coredns-66bc5c9577-5mh2p" Jan 23 00:58:52.304902 containerd[1536]: time="2026-01-23T00:58:52.304815442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8d6bc8dd4-bt4gl,Uid:31c15f9f-ad6e-46e0-8a36-0d164e7c4eed,Namespace:calico-system,Attempt:0,}" Jan 23 00:58:52.310127 containerd[1536]: time="2026-01-23T00:58:52.310042621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fc7f6fd7-dhdhc,Uid:68595a98-b8d6-439d-8c23-9d5c1a8e3d45,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:58:52.344213 systemd[1]: Created slice kubepods-burstable-pod1cf18dfd_56e7_4d93_81dc_7bc927aabc75.slice - libcontainer container kubepods-burstable-pod1cf18dfd_56e7_4d93_81dc_7bc927aabc75.slice. Jan 23 00:58:52.369846 systemd[1]: Created slice kubepods-besteffort-pod1226503f_d3f5_44b3_bde1_f270917649eb.slice - libcontainer container kubepods-besteffort-pod1226503f_d3f5_44b3_bde1_f270917649eb.slice. Jan 23 00:58:52.389043 systemd[1]: Created slice kubepods-besteffort-poda4c1f677_ec37_4544_9262_69c2ea18781d.slice - libcontainer container kubepods-besteffort-poda4c1f677_ec37_4544_9262_69c2ea18781d.slice. 
Jan 23 00:58:52.396301 kubelet[2813]: I0123 00:58:52.395630 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-cfwbw\" (UID: \"c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82\") " pod="calico-system/goldmane-7c778bb748-cfwbw" Jan 23 00:58:52.396301 kubelet[2813]: I0123 00:58:52.395686 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1226503f-d3f5-44b3-bde1-f270917649eb-calico-apiserver-certs\") pod \"calico-apiserver-8fc7f6fd7-2br74\" (UID: \"1226503f-d3f5-44b3-bde1-f270917649eb\") " pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" Jan 23 00:58:52.398291 kubelet[2813]: I0123 00:58:52.395719 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzb92\" (UniqueName: \"kubernetes.io/projected/1cf18dfd-56e7-4d93-81dc-7bc927aabc75-kube-api-access-vzb92\") pod \"coredns-66bc5c9577-fqf48\" (UID: \"1cf18dfd-56e7-4d93-81dc-7bc927aabc75\") " pod="kube-system/coredns-66bc5c9577-fqf48" Jan 23 00:58:52.398291 kubelet[2813]: I0123 00:58:52.396981 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzwsr\" (UniqueName: \"kubernetes.io/projected/1226503f-d3f5-44b3-bde1-f270917649eb-kube-api-access-mzwsr\") pod \"calico-apiserver-8fc7f6fd7-2br74\" (UID: \"1226503f-d3f5-44b3-bde1-f270917649eb\") " pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" Jan 23 00:58:52.398291 kubelet[2813]: I0123 00:58:52.397045 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c1f677-ec37-4544-9262-69c2ea18781d-tigera-ca-bundle\") pod \"calico-kube-controllers-5f9c4644d-vck6k\" (UID: \"a4c1f677-ec37-4544-9262-69c2ea18781d\") " pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k" Jan 23 00:58:52.398291 kubelet[2813]: I0123 00:58:52.397098 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cf18dfd-56e7-4d93-81dc-7bc927aabc75-config-volume\") pod \"coredns-66bc5c9577-fqf48\" (UID: \"1cf18dfd-56e7-4d93-81dc-7bc927aabc75\") " pod="kube-system/coredns-66bc5c9577-fqf48" Jan 23 00:58:52.398291 kubelet[2813]: I0123 00:58:52.397125 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82-config\") pod \"goldmane-7c778bb748-cfwbw\" (UID: \"c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82\") " pod="calico-system/goldmane-7c778bb748-cfwbw" Jan 23 00:58:52.398599 kubelet[2813]: I0123 00:58:52.397150 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82-goldmane-key-pair\") pod \"goldmane-7c778bb748-cfwbw\" (UID: \"c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82\") " pod="calico-system/goldmane-7c778bb748-cfwbw" Jan 23 00:58:52.398599 kubelet[2813]: I0123 00:58:52.397181 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpjrr\" (UniqueName: 
\"kubernetes.io/projected/c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82-kube-api-access-jpjrr\") pod \"goldmane-7c778bb748-cfwbw\" (UID: \"c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82\") " pod="calico-system/goldmane-7c778bb748-cfwbw" Jan 23 00:58:52.398599 kubelet[2813]: I0123 00:58:52.397222 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b9fl\" (UniqueName: \"kubernetes.io/projected/a4c1f677-ec37-4544-9262-69c2ea18781d-kube-api-access-5b9fl\") pod \"calico-kube-controllers-5f9c4644d-vck6k\" (UID: \"a4c1f677-ec37-4544-9262-69c2ea18781d\") " pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k" Jan 23 00:58:52.409109 systemd[1]: Created slice kubepods-besteffort-podc6fc1adc_08eb_414c_90b1_3cc3c5ed0e82.slice - libcontainer container kubepods-besteffort-podc6fc1adc_08eb_414c_90b1_3cc3c5ed0e82.slice. Jan 23 00:58:52.492003 containerd[1536]: time="2026-01-23T00:58:52.491947926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5mh2p,Uid:8d4a98f1-53d3-4f88-92ce-8fea82a35989,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:52.604528 containerd[1536]: time="2026-01-23T00:58:52.603502136Z" level=error msg="Failed to destroy network for sandbox \"f5a3b54f91132fc5de811fa7f7bf5e2c06800806f979ae3e33d5a18e085d605f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.607286 containerd[1536]: time="2026-01-23T00:58:52.607140271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fc7f6fd7-dhdhc,Uid:68595a98-b8d6-439d-8c23-9d5c1a8e3d45,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a3b54f91132fc5de811fa7f7bf5e2c06800806f979ae3e33d5a18e085d605f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.609085 kubelet[2813]: E0123 00:58:52.608477 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a3b54f91132fc5de811fa7f7bf5e2c06800806f979ae3e33d5a18e085d605f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.609085 kubelet[2813]: E0123 00:58:52.608580 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a3b54f91132fc5de811fa7f7bf5e2c06800806f979ae3e33d5a18e085d605f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" Jan 23 00:58:52.609085 kubelet[2813]: E0123 00:58:52.608614 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a3b54f91132fc5de811fa7f7bf5e2c06800806f979ae3e33d5a18e085d605f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" Jan 23 
00:58:52.609387 containerd[1536]: time="2026-01-23T00:58:52.608702022Z" level=error msg="Failed to destroy network for sandbox \"b623b405debecadf8fc50ec4c14278a6ee6ebb66c4ff0462fad4152533244122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.609457 kubelet[2813]: E0123 00:58:52.608689 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8fc7f6fd7-dhdhc_calico-apiserver(68595a98-b8d6-439d-8c23-9d5c1a8e3d45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8fc7f6fd7-dhdhc_calico-apiserver(68595a98-b8d6-439d-8c23-9d5c1a8e3d45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5a3b54f91132fc5de811fa7f7bf5e2c06800806f979ae3e33d5a18e085d605f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" podUID="68595a98-b8d6-439d-8c23-9d5c1a8e3d45" Jan 23 00:58:52.612291 containerd[1536]: time="2026-01-23T00:58:52.610699412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8d6bc8dd4-bt4gl,Uid:31c15f9f-ad6e-46e0-8a36-0d164e7c4eed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b623b405debecadf8fc50ec4c14278a6ee6ebb66c4ff0462fad4152533244122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.612450 kubelet[2813]: E0123 00:58:52.612107 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b623b405debecadf8fc50ec4c14278a6ee6ebb66c4ff0462fad4152533244122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.612450 kubelet[2813]: E0123 00:58:52.612178 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b623b405debecadf8fc50ec4c14278a6ee6ebb66c4ff0462fad4152533244122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8d6bc8dd4-bt4gl" Jan 23 00:58:52.612450 kubelet[2813]: E0123 00:58:52.612210 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b623b405debecadf8fc50ec4c14278a6ee6ebb66c4ff0462fad4152533244122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8d6bc8dd4-bt4gl" Jan 23 00:58:52.612727 kubelet[2813]: E0123 00:58:52.612619 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8d6bc8dd4-bt4gl_calico-system(31c15f9f-ad6e-46e0-8a36-0d164e7c4eed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-8d6bc8dd4-bt4gl_calico-system(31c15f9f-ad6e-46e0-8a36-0d164e7c4eed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b623b405debecadf8fc50ec4c14278a6ee6ebb66c4ff0462fad4152533244122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8d6bc8dd4-bt4gl" podUID="31c15f9f-ad6e-46e0-8a36-0d164e7c4eed" Jan 23 00:58:52.637430 containerd[1536]: time="2026-01-23T00:58:52.637368249Z" level=error msg="Failed to destroy network for sandbox \"a890439f88d63d3c16069ae071e71247471eead6ca857218785b4f11c10bdfc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.638958 containerd[1536]: time="2026-01-23T00:58:52.638894820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5mh2p,Uid:8d4a98f1-53d3-4f88-92ce-8fea82a35989,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a890439f88d63d3c16069ae071e71247471eead6ca857218785b4f11c10bdfc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.639430 kubelet[2813]: E0123 00:58:52.639174 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a890439f88d63d3c16069ae071e71247471eead6ca857218785b4f11c10bdfc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.639430 kubelet[2813]: E0123 00:58:52.639236 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a890439f88d63d3c16069ae071e71247471eead6ca857218785b4f11c10bdfc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5mh2p" Jan 23 00:58:52.639430 kubelet[2813]: E0123 00:58:52.639302 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a890439f88d63d3c16069ae071e71247471eead6ca857218785b4f11c10bdfc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5mh2p" Jan 23 00:58:52.639752 kubelet[2813]: E0123 00:58:52.639406 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5mh2p_kube-system(8d4a98f1-53d3-4f88-92ce-8fea82a35989)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5mh2p_kube-system(8d4a98f1-53d3-4f88-92ce-8fea82a35989)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a890439f88d63d3c16069ae071e71247471eead6ca857218785b4f11c10bdfc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5mh2p" podUID="8d4a98f1-53d3-4f88-92ce-8fea82a35989" Jan 23 00:58:52.656426 containerd[1536]: time="2026-01-23T00:58:52.656367136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fqf48,Uid:1cf18dfd-56e7-4d93-81dc-7bc927aabc75,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:52.682784 containerd[1536]: time="2026-01-23T00:58:52.682721286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fc7f6fd7-2br74,Uid:1226503f-d3f5-44b3-bde1-f270917649eb,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:58:52.704045 containerd[1536]: time="2026-01-23T00:58:52.703562518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9c4644d-vck6k,Uid:a4c1f677-ec37-4544-9262-69c2ea18781d,Namespace:calico-system,Attempt:0,}" Jan 23 00:58:52.726790 containerd[1536]: time="2026-01-23T00:58:52.726352753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cfwbw,Uid:c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82,Namespace:calico-system,Attempt:0,}" Jan 23 00:58:52.775754 systemd[1]: run-netns-cni\x2d5b3e0ed3\x2d0c2e\x2d298e\x2d731a\x2d35298df519bb.mount: Deactivated successfully. Jan 23 00:58:52.775905 systemd[1]: run-netns-cni\x2d3912c1ba\x2d175c\x2d95a8\x2d43e9\x2d9b4905b734e7.mount: Deactivated successfully. Jan 23 00:58:52.828188 containerd[1536]: time="2026-01-23T00:58:52.828012694Z" level=error msg="Failed to destroy network for sandbox \"2b28a29681a8d08a545d15c66f4ed51ecdcbcc027279ee81398bf157f65f108f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.835214 containerd[1536]: time="2026-01-23T00:58:52.833388649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fqf48,Uid:1cf18dfd-56e7-4d93-81dc-7bc927aabc75,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b28a29681a8d08a545d15c66f4ed51ecdcbcc027279ee81398bf157f65f108f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.833500 systemd[1]: run-netns-cni\x2d64494cad\x2d6b1a\x2d40e5\x2d0959\x2d61655a29474b.mount: Deactivated successfully. 
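
Every RunPodSandbox failure in this stretch has the same root cause, spelled out in the advice baked into the error: the calico CNI plugin reads /var/lib/calico/nodename, a file that only exists once the calico/node container is running and has mounted /var/lib/calico/. Until then each sandbox add, and the cleanup delete that follows it, fails on that stat. A minimal sketch of the gate; the path and advice string are copied from the log, the rest is illustrative:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // calico/node writes the node's name here after it starts; the
        // CNI plugin refuses to set up networking without it.
        name, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            fmt.Printf("%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
            os.Exit(1)
        }
        fmt.Println("node name:", string(name))
    }
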
Jan 23 00:58:52.839907 kubelet[2813]: E0123 00:58:52.839738 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b28a29681a8d08a545d15c66f4ed51ecdcbcc027279ee81398bf157f65f108f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.840248 kubelet[2813]: E0123 00:58:52.840050 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b28a29681a8d08a545d15c66f4ed51ecdcbcc027279ee81398bf157f65f108f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fqf48" Jan 23 00:58:52.840248 kubelet[2813]: E0123 00:58:52.840206 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b28a29681a8d08a545d15c66f4ed51ecdcbcc027279ee81398bf157f65f108f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fqf48" Jan 23 00:58:52.840767 kubelet[2813]: E0123 00:58:52.840608 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fqf48_kube-system(1cf18dfd-56e7-4d93-81dc-7bc927aabc75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fqf48_kube-system(1cf18dfd-56e7-4d93-81dc-7bc927aabc75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b28a29681a8d08a545d15c66f4ed51ecdcbcc027279ee81398bf157f65f108f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fqf48" podUID="1cf18dfd-56e7-4d93-81dc-7bc927aabc75" Jan 23 00:58:52.899756 containerd[1536]: time="2026-01-23T00:58:52.899611231Z" level=error msg="Failed to destroy network for sandbox \"27ba18e80ab7aa6eb4104dc54425e2a51c5df230e8946230f461ae56193a1def\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.907107 systemd[1]: run-netns-cni\x2df28b0b4b\x2d4389\x2d3487\x2de02c\x2d08cdeb5817b5.mount: Deactivated successfully. 
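
The run-netns mount units being cleaned up above encode their paths with systemd unit-name escaping: the "/" between path components becomes "-", and literal dashes inside a component become \x2d, which is how /run/netns/cni-5b3e0ed3-... turns into run-netns-cni\x2d5b3e0ed3\x2d....mount. A toy escaper covering just those two rules (the real systemd-escape handles more characters):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath applies the two escaping rules visible in the log:
    // dashes within a path component become \x2d, separators become "-".
    func escapePath(path string) string {
        parts := strings.Split(strings.Trim(path, "/"), "/")
        for i, p := range parts {
            parts[i] = strings.ReplaceAll(p, "-", `\x2d`)
        }
        return strings.Join(parts, "-")
    }

    func main() {
        fmt.Println(escapePath("/run/netns/cni-5b3e0ed3-0c2e-298e-731a-35298df519bb") + ".mount")
        // Output: run-netns-cni\x2d5b3e0ed3\x2d0c2e\x2d298e\x2d731a\x2d35298df519bb.mount
    }
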
Jan 23 00:58:52.908397 containerd[1536]: time="2026-01-23T00:58:52.906056554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fc7f6fd7-2br74,Uid:1226503f-d3f5-44b3-bde1-f270917649eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27ba18e80ab7aa6eb4104dc54425e2a51c5df230e8946230f461ae56193a1def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.911327 kubelet[2813]: E0123 00:58:52.910586 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27ba18e80ab7aa6eb4104dc54425e2a51c5df230e8946230f461ae56193a1def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:58:52.911724 kubelet[2813]: E0123 00:58:52.911538 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27ba18e80ab7aa6eb4104dc54425e2a51c5df230e8946230f461ae56193a1def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" Jan 23 00:58:52.911724 kubelet[2813]: E0123 00:58:52.911584 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27ba18e80ab7aa6eb4104dc54425e2a51c5df230e8946230f461ae56193a1def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" Jan 23 00:58:52.911724 kubelet[2813]: E0123 00:58:52.911674 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8fc7f6fd7-2br74_calico-apiserver(1226503f-d3f5-44b3-bde1-f270917649eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8fc7f6fd7-2br74_calico-apiserver(1226503f-d3f5-44b3-bde1-f270917649eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27ba18e80ab7aa6eb4104dc54425e2a51c5df230e8946230f461ae56193a1def\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" podUID="1226503f-d3f5-44b3-bde1-f270917649eb" Jan 23 00:58:52.929094 systemd[1]: Created slice kubepods-besteffort-pod0fe4dc2f_0955_4a3d_81e6_f5a1e1ac845e.slice - libcontainer container kubepods-besteffort-pod0fe4dc2f_0955_4a3d_81e6_f5a1e1ac845e.slice. 
Jan 23 00:58:52.937447 containerd[1536]: time="2026-01-23T00:58:52.937298480Z" level=error msg="Failed to destroy network for sandbox \"7b46d2211d187382b96b359f5d2fb79a76e44afe36806218112993958a661701\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:58:52.944111 containerd[1536]: time="2026-01-23T00:58:52.943002640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvsx8,Uid:0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e,Namespace:calico-system,Attempt:0,}"
Jan 23 00:58:52.943064 systemd[1]: run-netns-cni\x2d1916e60c\x2de219\x2d7a8e\x2dd301\x2d8eb1c4110042.mount: Deactivated successfully.
Jan 23 00:58:52.945349 containerd[1536]: time="2026-01-23T00:58:52.945245066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9c4644d-vck6k,Uid:a4c1f677-ec37-4544-9262-69c2ea18781d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b46d2211d187382b96b359f5d2fb79a76e44afe36806218112993958a661701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:58:52.946764 kubelet[2813]: E0123 00:58:52.946501 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b46d2211d187382b96b359f5d2fb79a76e44afe36806218112993958a661701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:58:52.946764 kubelet[2813]: E0123 00:58:52.946585 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b46d2211d187382b96b359f5d2fb79a76e44afe36806218112993958a661701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k"
Jan 23 00:58:52.946764 kubelet[2813]: E0123 00:58:52.946619 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b46d2211d187382b96b359f5d2fb79a76e44afe36806218112993958a661701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k"
Jan 23 00:58:52.947044 kubelet[2813]: E0123 00:58:52.946702 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f9c4644d-vck6k_calico-system(a4c1f677-ec37-4544-9262-69c2ea18781d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f9c4644d-vck6k_calico-system(a4c1f677-ec37-4544-9262-69c2ea18781d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b46d2211d187382b96b359f5d2fb79a76e44afe36806218112993958a661701\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k" podUID="a4c1f677-ec37-4544-9262-69c2ea18781d"
Jan 23 00:58:52.975236 containerd[1536]: time="2026-01-23T00:58:52.975120258Z" level=error msg="Failed to destroy network for sandbox \"7250a501d8a2af8e728788349d50c196afe45bb5039e6d752262b251dfd1d678\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:58:52.977275 containerd[1536]: time="2026-01-23T00:58:52.977169500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cfwbw,Uid:c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7250a501d8a2af8e728788349d50c196afe45bb5039e6d752262b251dfd1d678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:58:52.978295 kubelet[2813]: E0123 00:58:52.977950 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7250a501d8a2af8e728788349d50c196afe45bb5039e6d752262b251dfd1d678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:58:52.978295 kubelet[2813]: E0123 00:58:52.978021 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7250a501d8a2af8e728788349d50c196afe45bb5039e6d752262b251dfd1d678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-cfwbw"
Jan 23 00:58:52.978295 kubelet[2813]: E0123 00:58:52.978052 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7250a501d8a2af8e728788349d50c196afe45bb5039e6d752262b251dfd1d678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-cfwbw"
Jan 23 00:58:52.978560 kubelet[2813]: E0123 00:58:52.978126 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-cfwbw_calico-system(c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-cfwbw_calico-system(c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7250a501d8a2af8e728788349d50c196afe45bb5039e6d752262b251dfd1d678\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-cfwbw" podUID="c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82"
Jan 23 00:58:53.028809 containerd[1536]: time="2026-01-23T00:58:53.028653339Z" level=error msg="Failed to destroy network for sandbox \"472c5807a2a538e667978291c915fbe12864249a322973b6f69d85c623fe32c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:58:53.030741 containerd[1536]: time="2026-01-23T00:58:53.030671613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvsx8,Uid:0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"472c5807a2a538e667978291c915fbe12864249a322973b6f69d85c623fe32c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:58:53.031137 kubelet[2813]: E0123 00:58:53.031075 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472c5807a2a538e667978291c915fbe12864249a322973b6f69d85c623fe32c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:58:53.031898 kubelet[2813]: E0123 00:58:53.031164 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472c5807a2a538e667978291c915fbe12864249a322973b6f69d85c623fe32c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cvsx8"
Jan 23 00:58:53.031898 kubelet[2813]: E0123 00:58:53.031196 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472c5807a2a538e667978291c915fbe12864249a322973b6f69d85c623fe32c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cvsx8"
Jan 23 00:58:53.031898 kubelet[2813]: E0123 00:58:53.031302 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cvsx8_calico-system(0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cvsx8_calico-system(0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"472c5807a2a538e667978291c915fbe12864249a322973b6f69d85c623fe32c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e"
Jan 23 00:58:53.145220 containerd[1536]: time="2026-01-23T00:58:53.144742832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 23 00:58:53.742519 systemd[1]: run-netns-cni\x2d440995c8\x2d7162\x2d8f8c\x2d8e62\x2d43707729aa9e.mount: Deactivated successfully.
Jan 23 00:58:53.742657 systemd[1]: run-netns-cni\x2d704739e3\x2d4e8c\x2dcb5c\x2d55f8\x2d0e8fc6fafbdd.mount: Deactivated successfully.
Jan 23 00:58:59.951931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154687280.mount: Deactivated successfully.
Jan 23 00:58:59.984390 containerd[1536]: time="2026-01-23T00:58:59.984168288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:58:59.986947 containerd[1536]: time="2026-01-23T00:58:59.986670851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675"
Jan 23 00:58:59.988342 containerd[1536]: time="2026-01-23T00:58:59.988237177Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:58:59.991930 containerd[1536]: time="2026-01-23T00:58:59.991850795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:58:59.993029 containerd[1536]: time="2026-01-23T00:58:59.992984022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.848119231s"
Jan 23 00:58:59.993252 containerd[1536]: time="2026-01-23T00:58:59.993219724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Jan 23 00:59:00.025529 containerd[1536]: time="2026-01-23T00:59:00.025475110Z" level=info msg="CreateContainer within sandbox \"2cbca08f2141067171f3244e1fb5dab597a60cc2dd5e08f10e29d74625cd0c0d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 23 00:59:00.040551 containerd[1536]: time="2026-01-23T00:59:00.040493706Z" level=info msg="Container 4b2bf40a81abc01e1910130e06200d6a530af8f838dbbf01dcb2614d7fcfeae0: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:59:00.049601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1525893908.mount: Deactivated successfully.
Jan 23 00:59:00.060857 containerd[1536]: time="2026-01-23T00:59:00.060794264Z" level=info msg="CreateContainer within sandbox \"2cbca08f2141067171f3244e1fb5dab597a60cc2dd5e08f10e29d74625cd0c0d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4b2bf40a81abc01e1910130e06200d6a530af8f838dbbf01dcb2614d7fcfeae0\""
Jan 23 00:59:00.061985 containerd[1536]: time="2026-01-23T00:59:00.061792625Z" level=info msg="StartContainer for \"4b2bf40a81abc01e1910130e06200d6a530af8f838dbbf01dcb2614d7fcfeae0\""
Jan 23 00:59:00.064430 containerd[1536]: time="2026-01-23T00:59:00.064379610Z" level=info msg="connecting to shim 4b2bf40a81abc01e1910130e06200d6a530af8f838dbbf01dcb2614d7fcfeae0" address="unix:///run/containerd/s/419ade9c8d34e7dca07d01200a05439a2bb52205bc8530b7c88dc0ade158f504" protocol=ttrpc version=3
Jan 23 00:59:00.089494 systemd[1]: Started cri-containerd-4b2bf40a81abc01e1910130e06200d6a530af8f838dbbf01dcb2614d7fcfeae0.scope - libcontainer container 4b2bf40a81abc01e1910130e06200d6a530af8f838dbbf01dcb2614d7fcfeae0.
Jan 23 00:59:00.211197 containerd[1536]: time="2026-01-23T00:59:00.211049048Z" level=info msg="StartContainer for \"4b2bf40a81abc01e1910130e06200d6a530af8f838dbbf01dcb2614d7fcfeae0\" returns successfully"
Jan 23 00:59:00.336558 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 23 00:59:00.336974 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 23 00:59:00.564294 kubelet[2813]: I0123 00:59:00.562826 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-whisker-backend-key-pair\") pod \"31c15f9f-ad6e-46e0-8a36-0d164e7c4eed\" (UID: \"31c15f9f-ad6e-46e0-8a36-0d164e7c4eed\") "
Jan 23 00:59:00.565495 kubelet[2813]: I0123 00:59:00.565464 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmnln\" (UniqueName: \"kubernetes.io/projected/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-kube-api-access-rmnln\") pod \"31c15f9f-ad6e-46e0-8a36-0d164e7c4eed\" (UID: \"31c15f9f-ad6e-46e0-8a36-0d164e7c4eed\") "
Jan 23 00:59:00.567396 kubelet[2813]: I0123 00:59:00.565592 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-whisker-ca-bundle\") pod \"31c15f9f-ad6e-46e0-8a36-0d164e7c4eed\" (UID: \"31c15f9f-ad6e-46e0-8a36-0d164e7c4eed\") "
Jan 23 00:59:00.567396 kubelet[2813]: I0123 00:59:00.566097 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "31c15f9f-ad6e-46e0-8a36-0d164e7c4eed" (UID: "31c15f9f-ad6e-46e0-8a36-0d164e7c4eed"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 00:59:00.573014 kubelet[2813]: I0123 00:59:00.571849 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "31c15f9f-ad6e-46e0-8a36-0d164e7c4eed" (UID: "31c15f9f-ad6e-46e0-8a36-0d164e7c4eed"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 23 00:59:00.576750 kubelet[2813]: I0123 00:59:00.576696 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-kube-api-access-rmnln" (OuterVolumeSpecName: "kube-api-access-rmnln") pod "31c15f9f-ad6e-46e0-8a36-0d164e7c4eed" (UID: "31c15f9f-ad6e-46e0-8a36-0d164e7c4eed"). InnerVolumeSpecName "kube-api-access-rmnln". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 00:59:00.666860 kubelet[2813]: I0123 00:59:00.666800 2813 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-whisker-ca-bundle\") on node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" DevicePath \"\""
Jan 23 00:59:00.666860 kubelet[2813]: I0123 00:59:00.666852 2813 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-whisker-backend-key-pair\") on node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" DevicePath \"\""
Jan 23 00:59:00.666860 kubelet[2813]: I0123 00:59:00.666872 2813 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmnln\" (UniqueName: \"kubernetes.io/projected/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed-kube-api-access-rmnln\") on node \"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512\" DevicePath \"\""
Jan 23 00:59:00.922149 systemd[1]: Removed slice kubepods-besteffort-pod31c15f9f_ad6e_46e0_8a36_0d164e7c4eed.slice - libcontainer container kubepods-besteffort-pod31c15f9f_ad6e_46e0_8a36_0d164e7c4eed.slice.
Jan 23 00:59:00.952525 systemd[1]: var-lib-kubelet-pods-31c15f9f\x2dad6e\x2d46e0\x2d8a36\x2d0d164e7c4eed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drmnln.mount: Deactivated successfully.
Jan 23 00:59:00.952776 systemd[1]: var-lib-kubelet-pods-31c15f9f\x2dad6e\x2d46e0\x2d8a36\x2d0d164e7c4eed-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jan 23 00:59:01.206497 kubelet[2813]: I0123 00:59:01.204956 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8p6pq" podStartSLOduration=1.8376004689999998 podStartE2EDuration="20.204934022s" podCreationTimestamp="2026-01-23 00:58:41 +0000 UTC" firstStartedPulling="2026-01-23 00:58:41.627381637 +0000 UTC m=+24.925068310" lastFinishedPulling="2026-01-23 00:58:59.994715186 +0000 UTC m=+43.292401863" observedRunningTime="2026-01-23 00:59:01.202926309 +0000 UTC m=+44.500613047" watchObservedRunningTime="2026-01-23 00:59:01.204934022 +0000 UTC m=+44.502620723"
Jan 23 00:59:01.282478 systemd[1]: Created slice kubepods-besteffort-podc53a342e_2053_4bcb_9132_f1ca510f3ccf.slice - libcontainer container kubepods-besteffort-podc53a342e_2053_4bcb_9132_f1ca510f3ccf.slice.
Jan 23 00:59:01.371953 kubelet[2813]: I0123 00:59:01.371891 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c53a342e-2053-4bcb-9132-f1ca510f3ccf-whisker-backend-key-pair\") pod \"whisker-7956844f94-rcgtz\" (UID: \"c53a342e-2053-4bcb-9132-f1ca510f3ccf\") " pod="calico-system/whisker-7956844f94-rcgtz"
Jan 23 00:59:01.371953 kubelet[2813]: I0123 00:59:01.371952 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c53a342e-2053-4bcb-9132-f1ca510f3ccf-whisker-ca-bundle\") pod \"whisker-7956844f94-rcgtz\" (UID: \"c53a342e-2053-4bcb-9132-f1ca510f3ccf\") " pod="calico-system/whisker-7956844f94-rcgtz"
Jan 23 00:59:01.372238 kubelet[2813]: I0123 00:59:01.371984 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnr2\" (UniqueName: \"kubernetes.io/projected/c53a342e-2053-4bcb-9132-f1ca510f3ccf-kube-api-access-mxnr2\") pod \"whisker-7956844f94-rcgtz\" (UID: \"c53a342e-2053-4bcb-9132-f1ca510f3ccf\") " pod="calico-system/whisker-7956844f94-rcgtz"
Jan 23 00:59:01.590725 containerd[1536]: time="2026-01-23T00:59:01.590652296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7956844f94-rcgtz,Uid:c53a342e-2053-4bcb-9132-f1ca510f3ccf,Namespace:calico-system,Attempt:0,}"
Jan 23 00:59:01.736802 systemd-networkd[1427]: calibe080d572a5: Link UP
Jan 23 00:59:01.739021 systemd-networkd[1427]: calibe080d572a5: Gained carrier
Jan 23 00:59:01.772445 containerd[1536]: 2026-01-23 00:59:01.628 [INFO][3924] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 23 00:59:01.772445 containerd[1536]: 2026-01-23 00:59:01.642 [INFO][3924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0 whisker-7956844f94- calico-system c53a342e-2053-4bcb-9132-f1ca510f3ccf 888 0 2026-01-23 00:59:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7956844f94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512 whisker-7956844f94-rcgtz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibe080d572a5 [] [] }} ContainerID="8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" Namespace="calico-system" Pod="whisker-7956844f94-rcgtz" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-"
Jan 23 00:59:01.772445 containerd[1536]: 2026-01-23 00:59:01.642 [INFO][3924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" Namespace="calico-system" Pod="whisker-7956844f94-rcgtz" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0"
Jan 23 00:59:01.772445 containerd[1536]: 2026-01-23 00:59:01.673 [INFO][3936] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" HandleID="k8s-pod-network.8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0"
Jan 23 00:59:01.772996 containerd[1536]: 2026-01-23 00:59:01.673 [INFO][3936] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" HandleID="k8s-pod-network.8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", "pod":"whisker-7956844f94-rcgtz", "timestamp":"2026-01-23 00:59:01.673549646 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 00:59:01.772996 containerd[1536]: 2026-01-23 00:59:01.673 [INFO][3936] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 00:59:01.772996 containerd[1536]: 2026-01-23 00:59:01.673 [INFO][3936] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 00:59:01.772996 containerd[1536]: 2026-01-23 00:59:01.673 [INFO][3936] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512'
Jan 23 00:59:01.772996 containerd[1536]: 2026-01-23 00:59:01.682 [INFO][3936] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:01.772996 containerd[1536]: 2026-01-23 00:59:01.690 [INFO][3936] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:01.772996 containerd[1536]: 2026-01-23 00:59:01.695 [INFO][3936] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:01.772996 containerd[1536]: 2026-01-23 00:59:01.697 [INFO][3936] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:01.775382 containerd[1536]: 2026-01-23 00:59:01.700 [INFO][3936] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:01.775382 containerd[1536]: 2026-01-23 00:59:01.700 [INFO][3936] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:01.775382 containerd[1536]: 2026-01-23 00:59:01.702 [INFO][3936] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d
Jan 23 00:59:01.775382 containerd[1536]: 2026-01-23 00:59:01.708 [INFO][3936] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:01.775382 containerd[1536]: 2026-01-23 00:59:01.716 [INFO][3936] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:01.775382 containerd[1536]: 2026-01-23 00:59:01.716 [INFO][3936] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:01.775382 containerd[1536]: 2026-01-23 00:59:01.716 [INFO][3936] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 00:59:01.775382 containerd[1536]: 2026-01-23 00:59:01.716 [INFO][3936] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" HandleID="k8s-pod-network.8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0"
Jan 23 00:59:01.775811 containerd[1536]: 2026-01-23 00:59:01.721 [INFO][3924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" Namespace="calico-system" Pod="whisker-7956844f94-rcgtz" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0", GenerateName:"whisker-7956844f94-", Namespace:"calico-system", SelfLink:"", UID:"c53a342e-2053-4bcb-9132-f1ca510f3ccf", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 59, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7956844f94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"", Pod:"whisker-7956844f94-rcgtz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibe080d572a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 00:59:01.775954 containerd[1536]: 2026-01-23 00:59:01.721 [INFO][3924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" Namespace="calico-system" Pod="whisker-7956844f94-rcgtz" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0"
Jan 23 00:59:01.775954 containerd[1536]: 2026-01-23 00:59:01.721 [INFO][3924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe080d572a5 ContainerID="8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" Namespace="calico-system" Pod="whisker-7956844f94-rcgtz" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0"
Jan 23 00:59:01.775954 containerd[1536]: 2026-01-23 00:59:01.740 [INFO][3924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" Namespace="calico-system" Pod="whisker-7956844f94-rcgtz" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0"
Jan 23 00:59:01.776108 containerd[1536]: 2026-01-23 00:59:01.740 [INFO][3924] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" Namespace="calico-system" Pod="whisker-7956844f94-rcgtz" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0", GenerateName:"whisker-7956844f94-", Namespace:"calico-system", SelfLink:"", UID:"c53a342e-2053-4bcb-9132-f1ca510f3ccf", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 59, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7956844f94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d", Pod:"whisker-7956844f94-rcgtz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibe080d572a5", MAC:"9e:89:75:fb:e6:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 00:59:01.776234 containerd[1536]: 2026-01-23 00:59:01.763 [INFO][3924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" Namespace="calico-system" Pod="whisker-7956844f94-rcgtz" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-whisker--7956844f94--rcgtz-eth0"
Jan 23 00:59:01.829094 containerd[1536]: time="2026-01-23T00:59:01.828986147Z" level=info msg="connecting to shim 8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d" address="unix:///run/containerd/s/fca635849b8205d50242cf10721f4c3d85b084655f3153f96968fdad6a43d036" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:59:01.907852 systemd[1]: Started cri-containerd-8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d.scope - libcontainer container 8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d.
Jan 23 00:59:02.116292 containerd[1536]: time="2026-01-23T00:59:02.115924407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7956844f94-rcgtz,Uid:c53a342e-2053-4bcb-9132-f1ca510f3ccf,Namespace:calico-system,Attempt:0,} returns sandbox id \"8db904080a1921b670e5decb9b54eaabb1ec94d974c5bfc0ddccfec596ebff9d\""
Jan 23 00:59:02.119197 containerd[1536]: time="2026-01-23T00:59:02.118880985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 00:59:02.288903 containerd[1536]: time="2026-01-23T00:59:02.288837727Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:02.290614 containerd[1536]: time="2026-01-23T00:59:02.290510333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 00:59:02.291130 containerd[1536]: time="2026-01-23T00:59:02.290543617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 00:59:02.291597 kubelet[2813]: E0123 00:59:02.291495 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:59:02.295045 kubelet[2813]: E0123 00:59:02.292880 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:59:02.295045 kubelet[2813]: E0123 00:59:02.293129 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7956844f94-rcgtz_calico-system(c53a342e-2053-4bcb-9132-f1ca510f3ccf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:02.297045 containerd[1536]: time="2026-01-23T00:59:02.297008515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 00:59:02.464650 containerd[1536]: time="2026-01-23T00:59:02.464588846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:02.467175 containerd[1536]: time="2026-01-23T00:59:02.466538092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 00:59:02.467175 containerd[1536]: time="2026-01-23T00:59:02.466654500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 00:59:02.468909 kubelet[2813]: E0123 00:59:02.468730 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:59:02.469047 kubelet[2813]: E0123 00:59:02.468924 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:59:02.469711 kubelet[2813]: E0123 00:59:02.469669 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7956844f94-rcgtz_calico-system(c53a342e-2053-4bcb-9132-f1ca510f3ccf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:02.470848 kubelet[2813]: E0123 00:59:02.469862 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7956844f94-rcgtz" podUID="c53a342e-2053-4bcb-9132-f1ca510f3ccf"
Jan 23 00:59:02.754166 systemd-networkd[1427]: calibe080d572a5: Gained IPv6LL
Jan 23 00:59:02.922155 kubelet[2813]: I0123 00:59:02.921847 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c15f9f-ad6e-46e0-8a36-0d164e7c4eed" path="/var/lib/kubelet/pods/31c15f9f-ad6e-46e0-8a36-0d164e7c4eed/volumes"
Jan 23 00:59:02.939012 systemd-networkd[1427]: vxlan.calico: Link UP
Jan 23 00:59:02.940315 systemd-networkd[1427]: vxlan.calico: Gained carrier
Jan 23 00:59:03.201192 kubelet[2813]: E0123 00:59:03.200924 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7956844f94-rcgtz" podUID="c53a342e-2053-4bcb-9132-f1ca510f3ccf"
Jan 23 00:59:04.034061 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL
Jan 23 00:59:05.917371 containerd[1536]: time="2026-01-23T00:59:05.917297020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvsx8,Uid:0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e,Namespace:calico-system,Attempt:0,}"
Jan 23 00:59:05.919314 containerd[1536]: time="2026-01-23T00:59:05.919171025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fqf48,Uid:1cf18dfd-56e7-4d93-81dc-7bc927aabc75,Namespace:kube-system,Attempt:0,}"
Jan 23 00:59:06.125503 systemd-networkd[1427]: cali9cd5e3cb4a6: Link UP
Jan 23 00:59:06.128055 systemd-networkd[1427]: cali9cd5e3cb4a6: Gained carrier
Jan 23 00:59:06.157400 containerd[1536]: 2026-01-23 00:59:05.995 [INFO][4191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0 coredns-66bc5c9577- kube-system 1cf18dfd-56e7-4d93-81dc-7bc927aabc75 820 0 2026-01-23 00:58:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512 coredns-66bc5c9577-fqf48 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9cd5e3cb4a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" Namespace="kube-system" Pod="coredns-66bc5c9577-fqf48" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-"
Jan 23 00:59:06.157400 containerd[1536]: 2026-01-23 00:59:05.995 [INFO][4191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" Namespace="kube-system" Pod="coredns-66bc5c9577-fqf48" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0"
Jan 23 00:59:06.157400 containerd[1536]: 2026-01-23 00:59:06.053 [INFO][4216] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" HandleID="k8s-pod-network.ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0"
Jan 23 00:59:06.158186 containerd[1536]: 2026-01-23 00:59:06.053 [INFO][4216] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" HandleID="k8s-pod-network.ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f830), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", "pod":"coredns-66bc5c9577-fqf48", "timestamp":"2026-01-23 00:59:06.05318738 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 00:59:06.158186 containerd[1536]: 2026-01-23 00:59:06.053 [INFO][4216] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 00:59:06.158186 containerd[1536]: 2026-01-23 00:59:06.053 [INFO][4216] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 00:59:06.158186 containerd[1536]: 2026-01-23 00:59:06.053 [INFO][4216] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512'
Jan 23 00:59:06.158186 containerd[1536]: 2026-01-23 00:59:06.067 [INFO][4216] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:06.158186 containerd[1536]: 2026-01-23 00:59:06.073 [INFO][4216] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:06.158186 containerd[1536]: 2026-01-23 00:59:06.078 [INFO][4216] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:06.158186 containerd[1536]: 2026-01-23 00:59:06.081 [INFO][4216] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:06.158670 containerd[1536]: 2026-01-23 00:59:06.086 [INFO][4216] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:06.158670 containerd[1536]: 2026-01-23 00:59:06.086 [INFO][4216] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:06.158670 containerd[1536]: 2026-01-23 00:59:06.088 [INFO][4216] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9
Jan 23 00:59:06.158670 containerd[1536]: 2026-01-23 00:59:06.097 [INFO][4216] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:06.158670 containerd[1536]: 2026-01-23 00:59:06.109 [INFO][4216] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:06.158670 containerd[1536]: 2026-01-23 00:59:06.109 [INFO][4216] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512"
Jan 23 00:59:06.158670 containerd[1536]: 2026-01-23 00:59:06.109 [INFO][4216] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 00:59:06.158670 containerd[1536]: 2026-01-23 00:59:06.109 [INFO][4216] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" HandleID="k8s-pod-network.ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0" Jan 23 00:59:06.159079 containerd[1536]: 2026-01-23 00:59:06.114 [INFO][4191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" Namespace="kube-system" Pod="coredns-66bc5c9577-fqf48" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1cf18dfd-56e7-4d93-81dc-7bc927aabc75", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"", Pod:"coredns-66bc5c9577-fqf48", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9cd5e3cb4a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:06.159079 containerd[1536]: 2026-01-23 00:59:06.114 [INFO][4191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" Namespace="kube-system" Pod="coredns-66bc5c9577-fqf48" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0" Jan 23 00:59:06.159079 containerd[1536]: 2026-01-23 00:59:06.114 [INFO][4191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9cd5e3cb4a6 
ContainerID="ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" Namespace="kube-system" Pod="coredns-66bc5c9577-fqf48" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0" Jan 23 00:59:06.159079 containerd[1536]: 2026-01-23 00:59:06.130 [INFO][4191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" Namespace="kube-system" Pod="coredns-66bc5c9577-fqf48" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0" Jan 23 00:59:06.160941 containerd[1536]: 2026-01-23 00:59:06.132 [INFO][4191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" Namespace="kube-system" Pod="coredns-66bc5c9577-fqf48" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1cf18dfd-56e7-4d93-81dc-7bc927aabc75", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9", Pod:"coredns-66bc5c9577-fqf48", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9cd5e3cb4a6", MAC:"1a:1a:c8:75:d8:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:06.160941 containerd[1536]: 2026-01-23 00:59:06.154 [INFO][4191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" Namespace="kube-system" 
Pod="coredns-66bc5c9577-fqf48" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--fqf48-eth0" Jan 23 00:59:06.232698 containerd[1536]: time="2026-01-23T00:59:06.230532057Z" level=info msg="connecting to shim ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9" address="unix:///run/containerd/s/b7a6613fa85d3561176b737bb48eaf544bdbe2d4d5323924b36d7f45d1894dac" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:06.257461 systemd-networkd[1427]: cali1ad2122e7fe: Link UP Jan 23 00:59:06.261392 systemd-networkd[1427]: cali1ad2122e7fe: Gained carrier Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.009 [INFO][4190] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0 csi-node-driver- calico-system 0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e 714 0 2026-01-23 00:58:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512 csi-node-driver-cvsx8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1ad2122e7fe [] [] }} ContainerID="7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" Namespace="calico-system" Pod="csi-node-driver-cvsx8" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.009 [INFO][4190] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" Namespace="calico-system" Pod="csi-node-driver-cvsx8" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.062 [INFO][4221] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" HandleID="k8s-pod-network.7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.063 [INFO][4221] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" HandleID="k8s-pod-network.7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d52a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", "pod":"csi-node-driver-cvsx8", "timestamp":"2026-01-23 00:59:06.062633666 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.063 
[INFO][4221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.110 [INFO][4221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.110 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512' Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.168 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.180 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.193 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.200 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.205 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.205 [INFO][4221] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.209 [INFO][4221] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.223 [INFO][4221] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.237 [INFO][4221] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.238 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.238 [INFO][4221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
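The sequence just logged (acquire host-wide lock, confirm block affinity for 192.168.88.128/26, claim 192.168.88.131, release lock) is the core of Calico's per-node IPAM. Below is a minimal standard-library sketch of only the claim step; the real allocator records claims in an ordinal bitmap inside the block document, and the three pre-claimed addresses here are an assumption standing in for allocations made earlier in this boot.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block not yet claimed, mirroring the
// "Attempting to assign 1 addresses from block" step logged above.
func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !claimed[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; Calico would then try another block
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	claimed := map[netip.Addr]bool{ // assumption: .128-.130 handed out earlier
		netip.MustParseAddr("192.168.88.128"): true,
		netip.MustParseAddr("192.168.88.129"): true,
		netip.MustParseAddr("192.168.88.130"): true,
	}
	if a, ok := nextFree(block, claimed); ok {
		fmt.Println(a) // 192.168.88.131, matching the address claimed above
	}
}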
Jan 23 00:59:06.315158 containerd[1536]: 2026-01-23 00:59:06.238 [INFO][4221] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" HandleID="k8s-pod-network.7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0" Jan 23 00:59:06.317573 containerd[1536]: 2026-01-23 00:59:06.244 [INFO][4190] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" Namespace="calico-system" Pod="csi-node-driver-cvsx8" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"", Pod:"csi-node-driver-cvsx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ad2122e7fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:06.317573 containerd[1536]: 2026-01-23 00:59:06.244 [INFO][4190] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" Namespace="calico-system" Pod="csi-node-driver-cvsx8" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0" Jan 23 00:59:06.317573 containerd[1536]: 2026-01-23 00:59:06.245 [INFO][4190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ad2122e7fe ContainerID="7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" Namespace="calico-system" Pod="csi-node-driver-cvsx8" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0" Jan 23 00:59:06.317573 containerd[1536]: 2026-01-23 00:59:06.267 [INFO][4190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" Namespace="calico-system" Pod="csi-node-driver-cvsx8" 
WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0" Jan 23 00:59:06.317573 containerd[1536]: 2026-01-23 00:59:06.275 [INFO][4190] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" Namespace="calico-system" Pod="csi-node-driver-cvsx8" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a", Pod:"csi-node-driver-cvsx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ad2122e7fe", MAC:"86:59:8e:c4:61:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:06.317573 containerd[1536]: 2026-01-23 00:59:06.305 [INFO][4190] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" Namespace="calico-system" Pod="csi-node-driver-cvsx8" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-csi--node--driver--cvsx8-eth0" Jan 23 00:59:06.340600 systemd[1]: Started cri-containerd-ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9.scope - libcontainer container ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9. Jan 23 00:59:06.390304 containerd[1536]: time="2026-01-23T00:59:06.389454538Z" level=info msg="connecting to shim 7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a" address="unix:///run/containerd/s/68f84645a7675ed6ae69aaac6c403d0f394fcfdeb717d7eabb3d5fe1f24fe7c3" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:06.451586 systemd[1]: Started cri-containerd-7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a.scope - libcontainer container 7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a. 
Jan 23 00:59:06.495025 containerd[1536]: time="2026-01-23T00:59:06.494763790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fqf48,Uid:1cf18dfd-56e7-4d93-81dc-7bc927aabc75,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9\"" Jan 23 00:59:06.505342 containerd[1536]: time="2026-01-23T00:59:06.504979723Z" level=info msg="CreateContainer within sandbox \"ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:59:06.519462 containerd[1536]: time="2026-01-23T00:59:06.519415877Z" level=info msg="Container cd2c85d54325658f4b526d420afcfa92ec1892952f1b4f91b9b26d882dde5e7f: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:59:06.531600 containerd[1536]: time="2026-01-23T00:59:06.530148164Z" level=info msg="CreateContainer within sandbox \"ebaa20dc49438e96a720720f68beaf539805d548af8d91197f1696ba8ba924e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd2c85d54325658f4b526d420afcfa92ec1892952f1b4f91b9b26d882dde5e7f\"" Jan 23 00:59:06.537369 containerd[1536]: time="2026-01-23T00:59:06.537184011Z" level=info msg="StartContainer for \"cd2c85d54325658f4b526d420afcfa92ec1892952f1b4f91b9b26d882dde5e7f\"" Jan 23 00:59:06.543155 containerd[1536]: time="2026-01-23T00:59:06.543110936Z" level=info msg="connecting to shim cd2c85d54325658f4b526d420afcfa92ec1892952f1b4f91b9b26d882dde5e7f" address="unix:///run/containerd/s/b7a6613fa85d3561176b737bb48eaf544bdbe2d4d5323924b36d7f45d1894dac" protocol=ttrpc version=3 Jan 23 00:59:06.565951 containerd[1536]: time="2026-01-23T00:59:06.565459251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvsx8,Uid:0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b0ee7ba4f40ae3aadf08ffcc11194ca90813d5cc8bfcf438c1aad884f96005a\"" Jan 23 00:59:06.572342 containerd[1536]: time="2026-01-23T00:59:06.571021181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 00:59:06.594556 systemd[1]: Started cri-containerd-cd2c85d54325658f4b526d420afcfa92ec1892952f1b4f91b9b26d882dde5e7f.scope - libcontainer container cd2c85d54325658f4b526d420afcfa92ec1892952f1b4f91b9b26d882dde5e7f. 
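The PullImage calls issued above start failing below with 404s from ghcr.io. The failure can be reproduced outside the kubelet by resolving the manifest through the registry's v2 API. This is a sketch: the anonymous-token endpoint and Accept header follow the standard registry pull flow and are assumptions, not taken from this log.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/csi", "v3.30.4" // the reference that fails below

	// Fetch an anonymous pull token (endpoint per ghcr.io's WWW-Authenticate challenge).
	var tok struct{ Token string }
	r, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	json.NewDecoder(r.Body).Decode(&tok)
	r.Body.Close()

	// HEAD the manifest the way a puller resolves a tag.
	req, _ := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println(resp.Status) // "404 Not Found" for a tag the registry doesn't have
}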
Jan 23 00:59:06.672078 containerd[1536]: time="2026-01-23T00:59:06.672012809Z" level=info msg="StartContainer for \"cd2c85d54325658f4b526d420afcfa92ec1892952f1b4f91b9b26d882dde5e7f\" returns successfully" Jan 23 00:59:06.735768 containerd[1536]: time="2026-01-23T00:59:06.735577000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:06.737633 containerd[1536]: time="2026-01-23T00:59:06.737479886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 00:59:06.737633 containerd[1536]: time="2026-01-23T00:59:06.737596350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 00:59:06.738316 kubelet[2813]: E0123 00:59:06.738093 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:59:06.738316 kubelet[2813]: E0123 00:59:06.738154 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:59:06.739476 kubelet[2813]: E0123 00:59:06.738977 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cvsx8_calico-system(0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:06.740870 containerd[1536]: time="2026-01-23T00:59:06.740825155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 00:59:06.904500 containerd[1536]: time="2026-01-23T00:59:06.904427414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:06.906219 containerd[1536]: time="2026-01-23T00:59:06.906151700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 00:59:06.906595 containerd[1536]: time="2026-01-23T00:59:06.906199653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 00:59:06.906665 kubelet[2813]: E0123 00:59:06.906502 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:59:06.906665 kubelet[2813]: E0123 00:59:06.906564 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:59:06.906782 kubelet[2813]: E0123 00:59:06.906668 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cvsx8_calico-system(0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:06.906782 kubelet[2813]: E0123 00:59:06.906728 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e" Jan 23 00:59:06.918906 containerd[1536]: time="2026-01-23T00:59:06.918553691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fc7f6fd7-2br74,Uid:1226503f-d3f5-44b3-bde1-f270917649eb,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:59:06.922420 containerd[1536]: time="2026-01-23T00:59:06.921770605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fc7f6fd7-dhdhc,Uid:68595a98-b8d6-439d-8c23-9d5c1a8e3d45,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:59:07.125323 systemd-networkd[1427]: calia3f1bfefafe: Link UP Jan 23 00:59:07.127710 systemd-networkd[1427]: calia3f1bfefafe: Gained carrier Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.009 [INFO][4381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0 calico-apiserver-8fc7f6fd7- calico-apiserver 68595a98-b8d6-439d-8c23-9d5c1a8e3d45 816 0 2026-01-23 00:58:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8fc7f6fd7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512 calico-apiserver-8fc7f6fd7-dhdhc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia3f1bfefafe [] [] }} 
ContainerID="2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-dhdhc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.009 [INFO][4381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-dhdhc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.071 [INFO][4408] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" HandleID="k8s-pod-network.2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.071 [INFO][4408] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" HandleID="k8s-pod-network.2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011ddc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", "pod":"calico-apiserver-8fc7f6fd7-dhdhc", "timestamp":"2026-01-23 00:59:07.071162762 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.071 [INFO][4408] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.071 [INFO][4408] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.071 [INFO][4408] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512' Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.082 [INFO][4408] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.087 [INFO][4408] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.092 [INFO][4408] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.094 [INFO][4408] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.096 [INFO][4408] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.096 [INFO][4408] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.098 [INFO][4408] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002 Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.104 [INFO][4408] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.113 [INFO][4408] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.113 [INFO][4408] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.113 [INFO][4408] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
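Every entry in this transcript shares one framing: syslog-style timestamp, unit name with PID, then a free-form payload. A small parser for that framing (the payload itself, e.g. the Calico struct dumps, is left as an opaque string):

package main

import (
	"fmt"
	"regexp"
)

// Matches e.g. `Jan 23 00:59:07.113 containerd[1536]: <payload>`.
var entry = regexp.MustCompile(`^(\w{3} +\d{1,2} [\d:.]+) ([\w-]+)\[(\d+)\]: (.*)$`)

func main() {
	line := "Jan 23 00:59:07.113 containerd[1536]: ipam/ipam_plugin.go 398: Released host-wide IPAM lock."
	if m := entry.FindStringSubmatch(line); m != nil {
		fmt.Printf("time=%s unit=%s pid=%s msg=%q\n", m[1], m[2], m[3], m[4])
	}
}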
Jan 23 00:59:07.153712 containerd[1536]: 2026-01-23 00:59:07.113 [INFO][4408] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" HandleID="k8s-pod-network.2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0" Jan 23 00:59:07.157387 containerd[1536]: 2026-01-23 00:59:07.116 [INFO][4381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-dhdhc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0", GenerateName:"calico-apiserver-8fc7f6fd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"68595a98-b8d6-439d-8c23-9d5c1a8e3d45", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8fc7f6fd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"", Pod:"calico-apiserver-8fc7f6fd7-dhdhc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3f1bfefafe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:07.157387 containerd[1536]: 2026-01-23 00:59:07.116 [INFO][4381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-dhdhc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0" Jan 23 00:59:07.157387 containerd[1536]: 2026-01-23 00:59:07.116 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3f1bfefafe ContainerID="2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-dhdhc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0" Jan 23 00:59:07.157387 containerd[1536]: 2026-01-23 00:59:07.121 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" Namespace="calico-apiserver" 
Pod="calico-apiserver-8fc7f6fd7-dhdhc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0" Jan 23 00:59:07.157387 containerd[1536]: 2026-01-23 00:59:07.122 [INFO][4381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-dhdhc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0", GenerateName:"calico-apiserver-8fc7f6fd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"68595a98-b8d6-439d-8c23-9d5c1a8e3d45", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8fc7f6fd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002", Pod:"calico-apiserver-8fc7f6fd7-dhdhc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3f1bfefafe", MAC:"a6:a1:2f:96:33:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:07.157387 containerd[1536]: 2026-01-23 00:59:07.146 [INFO][4381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-dhdhc" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--dhdhc-eth0" Jan 23 00:59:07.206666 containerd[1536]: time="2026-01-23T00:59:07.206451067Z" level=info msg="connecting to shim 2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002" address="unix:///run/containerd/s/a9d3fe5b571ea2a8d27f02020b82fd337fd9024864c03a9a6dfc3a83904f432c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:07.222026 kubelet[2813]: E0123 00:59:07.221929 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e" Jan 23 00:59:07.297287 kubelet[2813]: I0123 00:59:07.297013 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fqf48" podStartSLOduration=45.296991223 podStartE2EDuration="45.296991223s" podCreationTimestamp="2026-01-23 00:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:59:07.294771398 +0000 UTC m=+50.592458098" watchObservedRunningTime="2026-01-23 00:59:07.296991223 +0000 UTC m=+50.594677914" Jan 23 00:59:07.311727 systemd[1]: Started cri-containerd-2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002.scope - libcontainer container 2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002. Jan 23 00:59:07.347733 systemd-networkd[1427]: calid7d90a4c85f: Link UP Jan 23 00:59:07.350571 systemd-networkd[1427]: calid7d90a4c85f: Gained carrier Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.009 [INFO][4382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0 calico-apiserver-8fc7f6fd7- calico-apiserver 1226503f-d3f5-44b3-bde1-f270917649eb 821 0 2026-01-23 00:58:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8fc7f6fd7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512 calico-apiserver-8fc7f6fd7-2br74 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid7d90a4c85f [] [] }} ContainerID="ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-2br74" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.009 [INFO][4382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-2br74" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.073 [INFO][4410] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" HandleID="k8s-pod-network.ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.074 [INFO][4410] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" HandleID="k8s-pod-network.ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", "pod":"calico-apiserver-8fc7f6fd7-2br74", "timestamp":"2026-01-23 00:59:07.073763297 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.074 [INFO][4410] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.113 [INFO][4410] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.113 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512' Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.190 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.207 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.243 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.258 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.272 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.276 [INFO][4410] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.281 [INFO][4410] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6 Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.308 [INFO][4410] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.327 [INFO][4410] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" 
host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.327 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.329 [INFO][4410] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:59:07.385189 containerd[1536]: 2026-01-23 00:59:07.329 [INFO][4410] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" HandleID="k8s-pod-network.ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0" Jan 23 00:59:07.389277 containerd[1536]: 2026-01-23 00:59:07.332 [INFO][4382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-2br74" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0", GenerateName:"calico-apiserver-8fc7f6fd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"1226503f-d3f5-44b3-bde1-f270917649eb", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8fc7f6fd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"", Pod:"calico-apiserver-8fc7f6fd7-2br74", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7d90a4c85f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:07.389277 containerd[1536]: 2026-01-23 00:59:07.332 [INFO][4382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-2br74" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0" Jan 23 00:59:07.389277 containerd[1536]: 2026-01-23 00:59:07.334 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7d90a4c85f 
ContainerID="ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-2br74" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0" Jan 23 00:59:07.389277 containerd[1536]: 2026-01-23 00:59:07.353 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-2br74" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0" Jan 23 00:59:07.389277 containerd[1536]: 2026-01-23 00:59:07.355 [INFO][4382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-2br74" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0", GenerateName:"calico-apiserver-8fc7f6fd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"1226503f-d3f5-44b3-bde1-f270917649eb", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8fc7f6fd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6", Pod:"calico-apiserver-8fc7f6fd7-2br74", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7d90a4c85f", MAC:"6a:53:f3:60:70:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:07.389277 containerd[1536]: 2026-01-23 00:59:07.379 [INFO][4382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" Namespace="calico-apiserver" Pod="calico-apiserver-8fc7f6fd7-2br74" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--apiserver--8fc7f6fd7--2br74-eth0" Jan 23 00:59:07.425709 systemd-networkd[1427]: cali9cd5e3cb4a6: Gained IPv6LL Jan 23 00:59:07.434835 containerd[1536]: time="2026-01-23T00:59:07.434781534Z" level=info msg="connecting to shim ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6" 
address="unix:///run/containerd/s/10317fe83806a39f8c65c07a1ba8735dcee15447628029b8ff540faff95084dd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:07.494818 systemd[1]: Started cri-containerd-ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6.scope - libcontainer container ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6. Jan 23 00:59:07.502575 containerd[1536]: time="2026-01-23T00:59:07.502504344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fc7f6fd7-dhdhc,Uid:68595a98-b8d6-439d-8c23-9d5c1a8e3d45,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2da8a69c262cdfcfca8b2247323bb324243013759d7e4de6eefc48de67f7f002\"" Jan 23 00:59:07.506352 containerd[1536]: time="2026-01-23T00:59:07.506159318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:59:07.578575 containerd[1536]: time="2026-01-23T00:59:07.578524155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fc7f6fd7-2br74,Uid:1226503f-d3f5-44b3-bde1-f270917649eb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ee22eb681be29375d3af9706a16d69a96dc4bf1b4a094dbd5be31f25c4b162d6\"" Jan 23 00:59:07.667475 containerd[1536]: time="2026-01-23T00:59:07.667405305Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:07.668943 containerd[1536]: time="2026-01-23T00:59:07.668886086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:59:07.668943 containerd[1536]: time="2026-01-23T00:59:07.668899719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:59:07.669379 kubelet[2813]: E0123 00:59:07.669221 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:07.669379 kubelet[2813]: E0123 00:59:07.669301 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:07.670231 kubelet[2813]: E0123 00:59:07.669665 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fc7f6fd7-dhdhc_calico-apiserver(68595a98-b8d6-439d-8c23-9d5c1a8e3d45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:07.670231 kubelet[2813]: E0123 00:59:07.669723 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" podUID="68595a98-b8d6-439d-8c23-9d5c1a8e3d45" Jan 23 00:59:07.670417 containerd[1536]: time="2026-01-23T00:59:07.669715284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:59:07.826042 containerd[1536]: time="2026-01-23T00:59:07.825977510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:07.827622 containerd[1536]: time="2026-01-23T00:59:07.827495373Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:59:07.827622 containerd[1536]: time="2026-01-23T00:59:07.827539618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:59:07.828044 kubelet[2813]: E0123 00:59:07.827986 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:07.828693 kubelet[2813]: E0123 00:59:07.828041 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:07.828693 kubelet[2813]: E0123 00:59:07.828142 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fc7f6fd7-2br74_calico-apiserver(1226503f-d3f5-44b3-bde1-f270917649eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:07.828693 kubelet[2813]: E0123 00:59:07.828185 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" podUID="1226503f-d3f5-44b3-bde1-f270917649eb" Jan 23 00:59:07.916944 containerd[1536]: time="2026-01-23T00:59:07.916776067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5mh2p,Uid:8d4a98f1-53d3-4f88-92ce-8fea82a35989,Namespace:kube-system,Attempt:0,}" Jan 23 00:59:07.919616 containerd[1536]: time="2026-01-23T00:59:07.919551162Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5f9c4644d-vck6k,Uid:a4c1f677-ec37-4544-9262-69c2ea18781d,Namespace:calico-system,Attempt:0,}" Jan 23 00:59:07.923228 containerd[1536]: time="2026-01-23T00:59:07.922968624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cfwbw,Uid:c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82,Namespace:calico-system,Attempt:0,}" Jan 23 00:59:08.129896 systemd-networkd[1427]: cali1ad2122e7fe: Gained IPv6LL Jan 23 00:59:08.207425 systemd-networkd[1427]: cali4a25ee8ba94: Link UP Jan 23 00:59:08.207887 systemd-networkd[1427]: cali4a25ee8ba94: Gained carrier Jan 23 00:59:08.250167 kubelet[2813]: E0123 00:59:08.249861 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" podUID="1226503f-d3f5-44b3-bde1-f270917649eb" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.029 [INFO][4532] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0 coredns-66bc5c9577- kube-system 8d4a98f1-53d3-4f88-92ce-8fea82a35989 818 0 2026-01-23 00:58:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512 coredns-66bc5c9577-5mh2p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4a25ee8ba94 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" Namespace="kube-system" Pod="coredns-66bc5c9577-5mh2p" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.029 [INFO][4532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" Namespace="kube-system" Pod="coredns-66bc5c9577-5mh2p" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.105 [INFO][4565] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" HandleID="k8s-pod-network.0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.111 [INFO][4565] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" HandleID="k8s-pod-network.0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" 
Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f930), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", "pod":"coredns-66bc5c9577-5mh2p", "timestamp":"2026-01-23 00:59:08.105604794 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.111 [INFO][4565] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.111 [INFO][4565] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.111 [INFO][4565] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512' Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.138 [INFO][4565] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.147 [INFO][4565] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.155 [INFO][4565] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.160 [INFO][4565] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.164 [INFO][4565] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.164 [INFO][4565] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.168 [INFO][4565] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.176 [INFO][4565] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.193 [INFO][4565] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.193 [INFO][4565] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.193 [INFO][4565] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:59:08.253898 containerd[1536]: 2026-01-23 00:59:08.193 [INFO][4565] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" HandleID="k8s-pod-network.0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0" Jan 23 00:59:08.256231 containerd[1536]: 2026-01-23 00:59:08.199 [INFO][4532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" Namespace="kube-system" Pod="coredns-66bc5c9577-5mh2p" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8d4a98f1-53d3-4f88-92ce-8fea82a35989", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"", Pod:"coredns-66bc5c9577-5mh2p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a25ee8ba94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:08.256231 containerd[1536]: 2026-01-23 00:59:08.200 [INFO][4532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" Namespace="kube-system" Pod="coredns-66bc5c9577-5mh2p" 
WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0" Jan 23 00:59:08.256231 containerd[1536]: 2026-01-23 00:59:08.200 [INFO][4532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a25ee8ba94 ContainerID="0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" Namespace="kube-system" Pod="coredns-66bc5c9577-5mh2p" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0" Jan 23 00:59:08.256231 containerd[1536]: 2026-01-23 00:59:08.207 [INFO][4532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" Namespace="kube-system" Pod="coredns-66bc5c9577-5mh2p" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0" Jan 23 00:59:08.259045 containerd[1536]: 2026-01-23 00:59:08.207 [INFO][4532] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" Namespace="kube-system" Pod="coredns-66bc5c9577-5mh2p" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8d4a98f1-53d3-4f88-92ce-8fea82a35989", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e", Pod:"coredns-66bc5c9577-5mh2p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a25ee8ba94", MAC:"fe:a2:0b:af:29:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:08.259045 containerd[1536]: 2026-01-23 00:59:08.240 [INFO][4532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" Namespace="kube-system" Pod="coredns-66bc5c9577-5mh2p" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-coredns--66bc5c9577--5mh2p-eth0" Jan 23 00:59:08.267005 kubelet[2813]: E0123 00:59:08.266896 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" podUID="68595a98-b8d6-439d-8c23-9d5c1a8e3d45" Jan 23 00:59:08.269010 kubelet[2813]: E0123 00:59:08.268940 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e" Jan 23 00:59:08.347361 containerd[1536]: time="2026-01-23T00:59:08.347057394Z" level=info msg="connecting to shim 0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e" address="unix:///run/containerd/s/7e452cefefd3ec44273d82ac00792d3a15a9a250f293fb16f0472ee906dcc84a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:08.413094 systemd-networkd[1427]: cali0b8d85fb5c8: Link UP Jan 23 00:59:08.418587 systemd-networkd[1427]: cali0b8d85fb5c8: Gained carrier Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.093 [INFO][4543] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0 calico-kube-controllers-5f9c4644d- calico-system a4c1f677-ec37-4544-9262-69c2ea18781d 822 0 2026-01-23 00:58:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f9c4644d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512 calico-kube-controllers-5f9c4644d-vck6k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0b8d85fb5c8 [] [] }} 
ContainerID="cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" Namespace="calico-system" Pod="calico-kube-controllers-5f9c4644d-vck6k" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.093 [INFO][4543] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" Namespace="calico-system" Pod="calico-kube-controllers-5f9c4644d-vck6k" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.194 [INFO][4577] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" HandleID="k8s-pod-network.cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.194 [INFO][4577] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" HandleID="k8s-pod-network.cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000386150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", "pod":"calico-kube-controllers-5f9c4644d-vck6k", "timestamp":"2026-01-23 00:59:08.194417418 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.195 [INFO][4577] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.195 [INFO][4577] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.195 [INFO][4577] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512' Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.244 [INFO][4577] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.272 [INFO][4577] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.288 [INFO][4577] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.303 [INFO][4577] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.324 [INFO][4577] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.325 [INFO][4577] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.336 [INFO][4577] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682 Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.366 [INFO][4577] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.394 [INFO][4577] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.396 [INFO][4577] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.396 [INFO][4577] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
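The host's affine block 192.168.88.128/26 covers the 64 addresses 192.168.88.128 through 192.168.88.191, which is why the sandboxes in this stretch of the log receive the consecutive addresses .134, .135, and (below) .136. A standalone net/netip check, for illustration only:

```go
// Verify that the addresses claimed in the log fall inside the host's
// affine /26 block; plain standard-library Go, not Calico code.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // 64 addrs: .128 - .191
	for _, s := range []string{"192.168.88.134", "192.168.88.135", "192.168.88.136"} {
		fmt.Println(s, block.Contains(netip.MustParseAddr(s))) // all true
	}
}
```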
Jan 23 00:59:08.462492 containerd[1536]: 2026-01-23 00:59:08.396 [INFO][4577] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" HandleID="k8s-pod-network.cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0" Jan 23 00:59:08.464930 containerd[1536]: 2026-01-23 00:59:08.405 [INFO][4543] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" Namespace="calico-system" Pod="calico-kube-controllers-5f9c4644d-vck6k" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0", GenerateName:"calico-kube-controllers-5f9c4644d-", Namespace:"calico-system", SelfLink:"", UID:"a4c1f677-ec37-4544-9262-69c2ea18781d", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9c4644d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"", Pod:"calico-kube-controllers-5f9c4644d-vck6k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b8d85fb5c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:08.464930 containerd[1536]: 2026-01-23 00:59:08.405 [INFO][4543] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" Namespace="calico-system" Pod="calico-kube-controllers-5f9c4644d-vck6k" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0" Jan 23 00:59:08.464930 containerd[1536]: 2026-01-23 00:59:08.406 [INFO][4543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b8d85fb5c8 ContainerID="cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" Namespace="calico-system" Pod="calico-kube-controllers-5f9c4644d-vck6k" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0" Jan 23 00:59:08.464930 containerd[1536]: 2026-01-23 00:59:08.422 [INFO][4543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" Namespace="calico-system" Pod="calico-kube-controllers-5f9c4644d-vck6k" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0" Jan 23 00:59:08.464930 containerd[1536]: 2026-01-23 00:59:08.423 [INFO][4543] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" Namespace="calico-system" Pod="calico-kube-controllers-5f9c4644d-vck6k" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0", GenerateName:"calico-kube-controllers-5f9c4644d-", Namespace:"calico-system", SelfLink:"", UID:"a4c1f677-ec37-4544-9262-69c2ea18781d", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9c4644d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682", Pod:"calico-kube-controllers-5f9c4644d-vck6k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b8d85fb5c8", MAC:"5a:52:ae:1f:a3:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:08.464930 containerd[1536]: 2026-01-23 00:59:08.451 [INFO][4543] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" Namespace="calico-system" Pod="calico-kube-controllers-5f9c4644d-vck6k" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-calico--kube--controllers--5f9c4644d--vck6k-eth0" Jan 23 00:59:08.482807 systemd[1]: Started cri-containerd-0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e.scope - libcontainer container 0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e. 
Jan 23 00:59:08.536527 containerd[1536]: time="2026-01-23T00:59:08.536454037Z" level=info msg="connecting to shim cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682" address="unix:///run/containerd/s/d83848ceb2fe2ebd6b000b0066f4faef9a055d4ad3ac3a29f2aa2fe8d5ec5e5f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:08.580693 systemd-networkd[1427]: caliae467104c38: Link UP Jan 23 00:59:08.583340 systemd-networkd[1427]: caliae467104c38: Gained carrier Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.112 [INFO][4554] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0 goldmane-7c778bb748- calico-system c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82 823 0 2026-01-23 00:58:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512 goldmane-7c778bb748-cfwbw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliae467104c38 [] [] }} ContainerID="5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" Namespace="calico-system" Pod="goldmane-7c778bb748-cfwbw" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.113 [INFO][4554] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" Namespace="calico-system" Pod="goldmane-7c778bb748-cfwbw" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.222 [INFO][4582] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" HandleID="k8s-pod-network.5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.226 [INFO][4582] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" HandleID="k8s-pod-network.5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310430), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", "pod":"goldmane-7c778bb748-cfwbw", "timestamp":"2026-01-23 00:59:08.222934974 +0000 UTC"}, Hostname:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.226 [INFO][4582] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.397 [INFO][4582] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.397 [INFO][4582] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512' Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.428 [INFO][4582] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.440 [INFO][4582] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.460 [INFO][4582] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.485 [INFO][4582] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.497 [INFO][4582] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.497 [INFO][4582] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.501 [INFO][4582] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825 Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.530 [INFO][4582] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.557 [INFO][4582] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.558 [INFO][4582] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" host="ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512" Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.561 [INFO][4582] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 00:59:08.635877 containerd[1536]: 2026-01-23 00:59:08.561 [INFO][4582] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" HandleID="k8s-pod-network.5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" Workload="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0" Jan 23 00:59:08.639669 containerd[1536]: 2026-01-23 00:59:08.573 [INFO][4554] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" Namespace="calico-system" Pod="goldmane-7c778bb748-cfwbw" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"", Pod:"goldmane-7c778bb748-cfwbw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliae467104c38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:08.639669 containerd[1536]: 2026-01-23 00:59:08.573 [INFO][4554] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" Namespace="calico-system" Pod="goldmane-7c778bb748-cfwbw" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0" Jan 23 00:59:08.639669 containerd[1536]: 2026-01-23 00:59:08.573 [INFO][4554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae467104c38 ContainerID="5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" Namespace="calico-system" Pod="goldmane-7c778bb748-cfwbw" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0" Jan 23 00:59:08.639669 containerd[1536]: 2026-01-23 00:59:08.587 [INFO][4554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" Namespace="calico-system" Pod="goldmane-7c778bb748-cfwbw" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0" Jan 23 00:59:08.639669 
containerd[1536]: 2026-01-23 00:59:08.588 [INFO][4554] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" Namespace="calico-system" Pod="goldmane-7c778bb748-cfwbw" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-nightly-20260122-2100-3b2fc14d4008dbddb512", ContainerID:"5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825", Pod:"goldmane-7c778bb748-cfwbw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliae467104c38", MAC:"c2:26:c7:cd:fb:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:08.639669 containerd[1536]: 2026-01-23 00:59:08.633 [INFO][4554] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" Namespace="calico-system" Pod="goldmane-7c778bb748-cfwbw" WorkloadEndpoint="ci--4459--2--2--nightly--20260122--2100--3b2fc14d4008dbddb512-k8s-goldmane--7c778bb748--cfwbw-eth0" Jan 23 00:59:08.637850 systemd[1]: Started cri-containerd-cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682.scope - libcontainer container cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682. 
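Every image pull in this log fails the same way: ghcr.io answers 404 Not Found, so containerd cannot resolve the tag at all. A short reproduction with containerd's Go client, assuming the default socket path and the "k8s.io" namespace in which CRI-managed images live:

```go
// Sketch: reproducing the pull failures seen below with containerd's
// Go client. The 404 from ghcr.io is what the log itself reports;
// socket path and namespace are assumptions for a stock containerd.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.30.4")
	fmt.Println(err) // expect: failed to resolve reference ... not found
}
```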
Jan 23 00:59:08.641851 systemd-networkd[1427]: calid7d90a4c85f: Gained IPv6LL Jan 23 00:59:08.737414 containerd[1536]: time="2026-01-23T00:59:08.737088938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5mh2p,Uid:8d4a98f1-53d3-4f88-92ce-8fea82a35989,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e\"" Jan 23 00:59:08.753110 containerd[1536]: time="2026-01-23T00:59:08.753060148Z" level=info msg="CreateContainer within sandbox \"0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:59:08.760965 containerd[1536]: time="2026-01-23T00:59:08.760866264Z" level=info msg="connecting to shim 5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825" address="unix:///run/containerd/s/28e7f9fd80fafdfb38d4a406fa2a2e2f7b209439479afb91d361c3aaa8d85c1f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:08.774130 containerd[1536]: time="2026-01-23T00:59:08.773766862Z" level=info msg="Container 29a4fad238eb2d5a05b21c6234e963f15edd4f847eb0af4e92320e5bd34fe30b: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:59:08.787078 containerd[1536]: time="2026-01-23T00:59:08.787031182Z" level=info msg="CreateContainer within sandbox \"0d4b8708f590da5f05a616812ef7c96ac8af01939891a52815ecebf6d7c3f91e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29a4fad238eb2d5a05b21c6234e963f15edd4f847eb0af4e92320e5bd34fe30b\"" Jan 23 00:59:08.789822 containerd[1536]: time="2026-01-23T00:59:08.789641142Z" level=info msg="StartContainer for \"29a4fad238eb2d5a05b21c6234e963f15edd4f847eb0af4e92320e5bd34fe30b\"" Jan 23 00:59:08.791962 containerd[1536]: time="2026-01-23T00:59:08.791923338Z" level=info msg="connecting to shim 29a4fad238eb2d5a05b21c6234e963f15edd4f847eb0af4e92320e5bd34fe30b" address="unix:///run/containerd/s/7e452cefefd3ec44273d82ac00792d3a15a9a250f293fb16f0472ee906dcc84a" protocol=ttrpc version=3 Jan 23 00:59:08.846686 systemd[1]: Started cri-containerd-29a4fad238eb2d5a05b21c6234e963f15edd4f847eb0af4e92320e5bd34fe30b.scope - libcontainer container 29a4fad238eb2d5a05b21c6234e963f15edd4f847eb0af4e92320e5bd34fe30b. Jan 23 00:59:08.853515 systemd[1]: Started cri-containerd-5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825.scope - libcontainer container 5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825. 
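Each failed pull puts the container into ImagePullBackOff, the "Back-off pulling image" state kubelet keeps reporting below. kubelet tracks this per image with an exponential backoff; the sketch uses client-go's flowcontrol.Backoff, and the 10-second initial delay with 5-minute cap is an assumption about this kubelet build rather than something the log states:

```go
// Sketch of the per-image exponential backoff behind the repeated
// "Back-off pulling image" messages. Values assumed, shape accurate.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	backoff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)
	key := "ghcr.io/flatcar/calico/goldmane:v3.30.4"

	for i := 0; i < 6; i++ {
		backoff.Next(key, backoff.Clock.Now())        // record a failed pull
		fmt.Println("next retry in", backoff.Get(key)) // 10s, 20s, 40s, ... capped at 5m
	}
}
```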
Jan 23 00:59:08.949125 containerd[1536]: time="2026-01-23T00:59:08.949078332Z" level=info msg="StartContainer for \"29a4fad238eb2d5a05b21c6234e963f15edd4f847eb0af4e92320e5bd34fe30b\" returns successfully" Jan 23 00:59:09.073649 containerd[1536]: time="2026-01-23T00:59:09.073586630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9c4644d-vck6k,Uid:a4c1f677-ec37-4544-9262-69c2ea18781d,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc04680c0ccfcef36cc97cd25d103a23f125292f66f30f564164c6cc7fde6682\"" Jan 23 00:59:09.079719 containerd[1536]: time="2026-01-23T00:59:09.079662638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 00:59:09.090525 systemd-networkd[1427]: calia3f1bfefafe: Gained IPv6LL Jan 23 00:59:09.095931 containerd[1536]: time="2026-01-23T00:59:09.095463775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cfwbw,Uid:c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82,Namespace:calico-system,Attempt:0,} returns sandbox id \"5bc75d8d2365a4aa0a1031a62b33ebf44f2e8c5782e07c879a916b279dac0825\"" Jan 23 00:59:09.251603 containerd[1536]: time="2026-01-23T00:59:09.251530301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:09.253022 containerd[1536]: time="2026-01-23T00:59:09.252965350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 00:59:09.253218 containerd[1536]: time="2026-01-23T00:59:09.253082563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 00:59:09.253406 kubelet[2813]: E0123 00:59:09.253331 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:59:09.253933 kubelet[2813]: E0123 00:59:09.253420 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:59:09.253933 kubelet[2813]: E0123 00:59:09.253629 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f9c4644d-vck6k_calico-system(a4c1f677-ec37-4544-9262-69c2ea18781d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:09.253933 kubelet[2813]: E0123 00:59:09.253687 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k" podUID="a4c1f677-ec37-4544-9262-69c2ea18781d" Jan 23 00:59:09.254801 containerd[1536]: time="2026-01-23T00:59:09.254719747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 00:59:09.274825 kubelet[2813]: E0123 00:59:09.274720 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" podUID="1226503f-d3f5-44b3-bde1-f270917649eb" Jan 23 00:59:09.276906 kubelet[2813]: E0123 00:59:09.276199 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k" podUID="a4c1f677-ec37-4544-9262-69c2ea18781d" Jan 23 00:59:09.277154 kubelet[2813]: E0123 00:59:09.276709 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" podUID="68595a98-b8d6-439d-8c23-9d5c1a8e3d45" Jan 23 00:59:09.288768 kubelet[2813]: I0123 00:59:09.288545 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5mh2p" podStartSLOduration=47.288523786 podStartE2EDuration="47.288523786s" podCreationTimestamp="2026-01-23 00:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:59:09.288111085 +0000 UTC m=+52.585797785" watchObservedRunningTime="2026-01-23 00:59:09.288523786 +0000 UTC m=+52.586210486" Jan 23 00:59:09.428172 containerd[1536]: time="2026-01-23T00:59:09.427108285Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:09.429068 containerd[1536]: time="2026-01-23T00:59:09.429002911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 00:59:09.429211 containerd[1536]: time="2026-01-23T00:59:09.429123087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 00:59:09.429577 kubelet[2813]: E0123 00:59:09.429510 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:59:09.429577 kubelet[2813]: E0123 00:59:09.429572 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:59:09.429905 kubelet[2813]: E0123 00:59:09.429704 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-cfwbw_calico-system(c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:09.429905 kubelet[2813]: E0123 00:59:09.429761 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cfwbw" podUID="c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82" Jan 23 00:59:09.921472 systemd-networkd[1427]: cali0b8d85fb5c8: Gained IPv6LL Jan 23 00:59:09.985573 systemd-networkd[1427]: cali4a25ee8ba94: Gained IPv6LL Jan 23 00:59:10.279480 kubelet[2813]: E0123 00:59:10.279399 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k" podUID="a4c1f677-ec37-4544-9262-69c2ea18781d" Jan 23 00:59:10.280558 kubelet[2813]: E0123 00:59:10.280060 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cfwbw" podUID="c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82" Jan 23 00:59:10.561550 
systemd-networkd[1427]: caliae467104c38: Gained IPv6LL Jan 23 00:59:12.819472 ntpd[1660]: Listen normally on 6 vxlan.calico 192.168.88.128:123 Jan 23 00:59:12.819564 ntpd[1660]: Listen normally on 7 calibe080d572a5 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 00:59:12.819609 ntpd[1660]: Listen normally on 8 vxlan.calico [fe80::6498:b6ff:fe87:6246%5]:123 Jan 23 00:59:12.819654 ntpd[1660]: Listen normally on 9 cali9cd5e3cb4a6 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 00:59:12.819695 ntpd[1660]: Listen normally on 10 cali1ad2122e7fe [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 00:59:12.819737 ntpd[1660]: Listen normally on 11 calia3f1bfefafe [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 00:59:12.819791 ntpd[1660]: Listen normally on 12 calid7d90a4c85f [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 00:59:12.819832 ntpd[1660]: Listen normally on 13 cali4a25ee8ba94 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 23 00:59:12.819872 ntpd[1660]: Listen normally on 14 cali0b8d85fb5c8 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 23 00:59:12.819914 ntpd[1660]: Listen normally on 15 caliae467104c38 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 00:59:17.915806 containerd[1536]: time="2026-01-23T00:59:17.915518872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 00:59:18.084820 containerd[1536]: time="2026-01-23T00:59:18.084747797Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:18.086439 containerd[1536]: time="2026-01-23T00:59:18.086375291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 00:59:18.086439 containerd[1536]: time="2026-01-23T00:59:18.086387866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 00:59:18.086729 kubelet[2813]: E0123 00:59:18.086689 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed
to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:59:18.087224 kubelet[2813]: E0123 00:59:18.086749 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:59:18.087224 kubelet[2813]: E0123 00:59:18.087189 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7956844f94-rcgtz_calico-system(c53a342e-2053-4bcb-9132-f1ca510f3ccf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:18.089843 containerd[1536]: time="2026-01-23T00:59:18.089773178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 00:59:18.244555 containerd[1536]: time="2026-01-23T00:59:18.244380460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:18.245954 containerd[1536]: time="2026-01-23T00:59:18.245899724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 00:59:18.246116 containerd[1536]: time="2026-01-23T00:59:18.246008090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 00:59:18.246433 kubelet[2813]: E0123 00:59:18.246369 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:59:18.246556 kubelet[2813]: E0123 00:59:18.246429 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:59:18.246616 kubelet[2813]: E0123 00:59:18.246572 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7956844f94-rcgtz_calico-system(c53a342e-2053-4bcb-9132-f1ca510f3ccf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:18.246673 kubelet[2813]: E0123 00:59:18.246635 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7956844f94-rcgtz" podUID="c53a342e-2053-4bcb-9132-f1ca510f3ccf" Jan 23 00:59:20.916757 containerd[1536]: time="2026-01-23T00:59:20.916681207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 00:59:21.079460 containerd[1536]: time="2026-01-23T00:59:21.079389773Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:21.081140 containerd[1536]: time="2026-01-23T00:59:21.081066742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 00:59:21.081337 containerd[1536]: time="2026-01-23T00:59:21.081186485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 00:59:21.081815 kubelet[2813]: E0123 00:59:21.081432 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:59:21.081815 kubelet[2813]: E0123 00:59:21.081487 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:59:21.081815 kubelet[2813]: E0123 00:59:21.081703 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-cfwbw_calico-system(c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:21.081815 kubelet[2813]: E0123 00:59:21.081761 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cfwbw" podUID="c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82" Jan 23 00:59:21.083785 containerd[1536]: time="2026-01-23T00:59:21.082703084Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 00:59:21.246337 containerd[1536]: time="2026-01-23T00:59:21.246140268Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:21.248418 containerd[1536]: time="2026-01-23T00:59:21.248354929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 00:59:21.248609 containerd[1536]: time="2026-01-23T00:59:21.248477062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 00:59:21.248852 kubelet[2813]: E0123 00:59:21.248800 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:59:21.249188 kubelet[2813]: E0123 00:59:21.248865 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:59:21.249418 kubelet[2813]: E0123 00:59:21.249158 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f9c4644d-vck6k_calico-system(a4c1f677-ec37-4544-9262-69c2ea18781d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:21.249823 kubelet[2813]: E0123 00:59:21.249466 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k" podUID="a4c1f677-ec37-4544-9262-69c2ea18781d" Jan 23 00:59:21.251197 containerd[1536]: time="2026-01-23T00:59:21.250885457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:59:21.449776 containerd[1536]: time="2026-01-23T00:59:21.449695521Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:21.451419 containerd[1536]: time="2026-01-23T00:59:21.451348156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
23 00:59:21.451419 containerd[1536]: time="2026-01-23T00:59:21.451372695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:59:21.451792 kubelet[2813]: E0123 00:59:21.451728 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:21.451792 kubelet[2813]: E0123 00:59:21.451803 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:21.452092 kubelet[2813]: E0123 00:59:21.451916 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fc7f6fd7-2br74_calico-apiserver(1226503f-d3f5-44b3-bde1-f270917649eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:21.452092 kubelet[2813]: E0123 00:59:21.451971 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" podUID="1226503f-d3f5-44b3-bde1-f270917649eb" Jan 23 00:59:21.914637 containerd[1536]: time="2026-01-23T00:59:21.914350533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:59:22.098281 containerd[1536]: time="2026-01-23T00:59:22.097391643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:22.100290 containerd[1536]: time="2026-01-23T00:59:22.099562711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:59:22.100290 containerd[1536]: time="2026-01-23T00:59:22.099692955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:59:22.100465 kubelet[2813]: E0123 00:59:22.100126 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:22.100465 kubelet[2813]: E0123 00:59:22.100307 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:22.100465 kubelet[2813]: E0123 00:59:22.100414 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fc7f6fd7-dhdhc_calico-apiserver(68595a98-b8d6-439d-8c23-9d5c1a8e3d45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:22.100979 kubelet[2813]: E0123 00:59:22.100461 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" podUID="68595a98-b8d6-439d-8c23-9d5c1a8e3d45" Jan 23 00:59:22.917210 containerd[1536]: time="2026-01-23T00:59:22.916417165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 00:59:23.073837 containerd[1536]: time="2026-01-23T00:59:23.073784147Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:23.075536 containerd[1536]: time="2026-01-23T00:59:23.075399784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 00:59:23.075536 containerd[1536]: time="2026-01-23T00:59:23.075447807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 00:59:23.075988 kubelet[2813]: E0123 00:59:23.075933 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:59:23.076322 kubelet[2813]: E0123 00:59:23.075998 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:59:23.076322 kubelet[2813]: E0123 00:59:23.076109 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cvsx8_calico-system(0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:23.077858 containerd[1536]: time="2026-01-23T00:59:23.077799673Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 00:59:23.238335 containerd[1536]: time="2026-01-23T00:59:23.238143630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:23.239945 containerd[1536]: time="2026-01-23T00:59:23.239834860Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 00:59:23.240151 containerd[1536]: time="2026-01-23T00:59:23.239856162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 00:59:23.240294 kubelet[2813]: E0123 00:59:23.240223 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:59:23.240294 kubelet[2813]: E0123 00:59:23.240307 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:59:23.241061 kubelet[2813]: E0123 00:59:23.240416 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cvsx8_calico-system(0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:23.241061 kubelet[2813]: E0123 00:59:23.240483 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e" Jan 23 00:59:23.611543 systemd[1]: Started sshd@9-10.128.0.101:22-4.153.228.146:56456.service - OpenSSH per-connection server daemon (4.153.228.146:56456). 
Jan 23 00:59:23.853385 sshd[4832]: Accepted publickey for core from 4.153.228.146 port 56456 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 00:59:23.855171 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:23.862362 systemd-logind[1512]: New session 10 of user core.
Jan 23 00:59:23.868472 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 23 00:59:24.111560 sshd[4838]: Connection closed by 4.153.228.146 port 56456
Jan 23 00:59:24.112454 sshd-session[4832]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:24.118955 systemd[1]: sshd@9-10.128.0.101:22-4.153.228.146:56456.service: Deactivated successfully.
Jan 23 00:59:24.122782 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 00:59:24.124081 systemd-logind[1512]: Session 10 logged out. Waiting for processes to exit.
Jan 23 00:59:24.126710 systemd-logind[1512]: Removed session 10.
Jan 23 00:59:29.155430 systemd[1]: Started sshd@10-10.128.0.101:22-4.153.228.146:60020.service - OpenSSH per-connection server daemon (4.153.228.146:60020).
Jan 23 00:59:29.396086 sshd[4853]: Accepted publickey for core from 4.153.228.146 port 60020 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 00:59:29.397677 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:29.404893 systemd-logind[1512]: New session 11 of user core.
Jan 23 00:59:29.412465 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 23 00:59:29.639819 sshd[4856]: Connection closed by 4.153.228.146 port 60020
Jan 23 00:59:29.641557 sshd-session[4853]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:29.647233 systemd-logind[1512]: Session 11 logged out. Waiting for processes to exit.
Jan 23 00:59:29.648450 systemd[1]: sshd@10-10.128.0.101:22-4.153.228.146:60020.service: Deactivated successfully.
Jan 23 00:59:29.652474 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 00:59:29.655683 systemd-logind[1512]: Removed session 11.
Jan 23 00:59:31.915615 kubelet[2813]: E0123 00:59:31.915528 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7956844f94-rcgtz" podUID="c53a342e-2053-4bcb-9132-f1ca510f3ccf"
Jan 23 00:59:33.915307 kubelet[2813]: E0123 00:59:33.915163 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" podUID="1226503f-d3f5-44b3-bde1-f270917649eb"
Jan 23 00:59:33.915307 kubelet[2813]: E0123 00:59:33.915155 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cfwbw" podUID="c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82"
Jan 23 00:59:34.682412 systemd[1]: Started sshd@11-10.128.0.101:22-4.153.228.146:60964.service - OpenSSH per-connection server daemon (4.153.228.146:60964).
Jan 23 00:59:34.918019 kubelet[2813]: E0123 00:59:34.917825 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e"
Jan 23 00:59:34.931213 sshd[4918]: Accepted publickey for core from 4.153.228.146 port 60964 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 00:59:34.932915 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:34.944897 systemd-logind[1512]: New session 12 of user core.
Jan 23 00:59:34.954462 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 23 00:59:35.182127 sshd[4921]: Connection closed by 4.153.228.146 port 60964
Jan 23 00:59:35.183068 sshd-session[4918]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:35.189281 systemd[1]: sshd@11-10.128.0.101:22-4.153.228.146:60964.service: Deactivated successfully.
Jan 23 00:59:35.192738 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 00:59:35.194510 systemd-logind[1512]: Session 12 logged out. Waiting for processes to exit.
Jan 23 00:59:35.197093 systemd-logind[1512]: Removed session 12.
Jan 23 00:59:35.915731 kubelet[2813]: E0123 00:59:35.915672 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k" podUID="a4c1f677-ec37-4544-9262-69c2ea18781d"
Jan 23 00:59:36.918546 kubelet[2813]: E0123 00:59:36.918482 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" podUID="68595a98-b8d6-439d-8c23-9d5c1a8e3d45"
Jan 23 00:59:40.240642 systemd[1]: Started sshd@12-10.128.0.101:22-4.153.228.146:60970.service - OpenSSH per-connection server daemon (4.153.228.146:60970).
Jan 23 00:59:40.510603 sshd[4939]: Accepted publickey for core from 4.153.228.146 port 60970 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 00:59:40.512358 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:40.520347 systemd-logind[1512]: New session 13 of user core.
Jan 23 00:59:40.525479 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 00:59:40.779008 sshd[4942]: Connection closed by 4.153.228.146 port 60970
Jan 23 00:59:40.780047 sshd-session[4939]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:40.787096 systemd-logind[1512]: Session 13 logged out. Waiting for processes to exit.
Jan 23 00:59:40.787881 systemd[1]: sshd@12-10.128.0.101:22-4.153.228.146:60970.service: Deactivated successfully.
Jan 23 00:59:40.790786 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 00:59:40.793992 systemd-logind[1512]: Removed session 13.
Jan 23 00:59:40.821396 systemd[1]: Started sshd@13-10.128.0.101:22-4.153.228.146:60972.service - OpenSSH per-connection server daemon (4.153.228.146:60972).
Jan 23 00:59:41.061991 sshd[4955]: Accepted publickey for core from 4.153.228.146 port 60972 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 00:59:41.063620 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:41.070410 systemd-logind[1512]: New session 14 of user core.
Jan 23 00:59:41.078470 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 00:59:41.348974 sshd[4958]: Connection closed by 4.153.228.146 port 60972
Jan 23 00:59:41.350176 sshd-session[4955]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:41.358847 systemd-logind[1512]: Session 14 logged out. Waiting for processes to exit.
Jan 23 00:59:41.359956 systemd[1]: sshd@13-10.128.0.101:22-4.153.228.146:60972.service: Deactivated successfully.
Jan 23 00:59:41.367786 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 00:59:41.373886 systemd-logind[1512]: Removed session 14.
Jan 23 00:59:41.393633 systemd[1]: Started sshd@14-10.128.0.101:22-4.153.228.146:60988.service - OpenSSH per-connection server daemon (4.153.228.146:60988).
Jan 23 00:59:41.637440 sshd[4968]: Accepted publickey for core from 4.153.228.146 port 60988 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 00:59:41.640150 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:41.650938 systemd-logind[1512]: New session 15 of user core.
Jan 23 00:59:41.656524 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 00:59:41.970191 sshd[4971]: Connection closed by 4.153.228.146 port 60988
Jan 23 00:59:41.972527 sshd-session[4968]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:41.979758 systemd-logind[1512]: Session 15 logged out. Waiting for processes to exit.
Jan 23 00:59:41.982355 systemd[1]: sshd@14-10.128.0.101:22-4.153.228.146:60988.service: Deactivated successfully.
Jan 23 00:59:41.987872 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 00:59:41.993173 systemd-logind[1512]: Removed session 15.
Jan 23 00:59:43.915962 containerd[1536]: time="2026-01-23T00:59:43.915491637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 00:59:44.070233 containerd[1536]: time="2026-01-23T00:59:44.070163390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:44.071856 containerd[1536]: time="2026-01-23T00:59:44.071701701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 00:59:44.071856 containerd[1536]: time="2026-01-23T00:59:44.071761937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 00:59:44.072137 kubelet[2813]: E0123 00:59:44.072053 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:59:44.072137 kubelet[2813]: E0123 00:59:44.072128 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:59:44.072754 kubelet[2813]: E0123 00:59:44.072236 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7956844f94-rcgtz_calico-system(c53a342e-2053-4bcb-9132-f1ca510f3ccf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:44.074649 containerd[1536]: time="2026-01-23T00:59:44.074599258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 00:59:44.233672 containerd[1536]: time="2026-01-23T00:59:44.233238500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:44.234972 containerd[1536]: time="2026-01-23T00:59:44.234897846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 00:59:44.235317 containerd[1536]: time="2026-01-23T00:59:44.234931539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 00:59:44.235399 kubelet[2813]: E0123 00:59:44.235293 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:59:44.235399 kubelet[2813]: E0123 00:59:44.235374 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:59:44.235575 kubelet[2813]: E0123 00:59:44.235504 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7956844f94-rcgtz_calico-system(c53a342e-2053-4bcb-9132-f1ca510f3ccf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:44.235844 kubelet[2813]: E0123 00:59:44.235641 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7956844f94-rcgtz" podUID="c53a342e-2053-4bcb-9132-f1ca510f3ccf"
Jan 23 00:59:44.915781 containerd[1536]: time="2026-01-23T00:59:44.915002347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 00:59:45.068643 containerd[1536]: time="2026-01-23T00:59:45.068568817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:45.070340 containerd[1536]: time="2026-01-23T00:59:45.070230012Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 00:59:45.070340 containerd[1536]: time="2026-01-23T00:59:45.070300653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:59:45.070664 kubelet[2813]: E0123 00:59:45.070535 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:59:45.070664 kubelet[2813]: E0123 00:59:45.070590 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:59:45.070799 kubelet[2813]: E0123 00:59:45.070696 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fc7f6fd7-2br74_calico-apiserver(1226503f-d3f5-44b3-bde1-f270917649eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:45.070799 kubelet[2813]: E0123 00:59:45.070757 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" podUID="1226503f-d3f5-44b3-bde1-f270917649eb"
Jan 23 00:59:47.013650 systemd[1]: Started sshd@15-10.128.0.101:22-4.153.228.146:39040.service - OpenSSH per-connection server daemon (4.153.228.146:39040).
Jan 23 00:59:47.254841 sshd[4992]: Accepted publickey for core from 4.153.228.146 port 39040 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 00:59:47.256594 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:47.263455 systemd-logind[1512]: New session 16 of user core.
Jan 23 00:59:47.268483 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 00:59:47.530445 sshd[4995]: Connection closed by 4.153.228.146 port 39040
Jan 23 00:59:47.530295 sshd-session[4992]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:47.537613 systemd[1]: sshd@15-10.128.0.101:22-4.153.228.146:39040.service: Deactivated successfully.
Jan 23 00:59:47.540546 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 00:59:47.542214 systemd-logind[1512]: Session 16 logged out. Waiting for processes to exit.
Jan 23 00:59:47.544796 systemd-logind[1512]: Removed session 16.
Jan 23 00:59:47.916692 containerd[1536]: time="2026-01-23T00:59:47.916079105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 00:59:48.071410 containerd[1536]: time="2026-01-23T00:59:48.071337838Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:48.073134 containerd[1536]: time="2026-01-23T00:59:48.073077265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 00:59:48.073332 containerd[1536]: time="2026-01-23T00:59:48.073197838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 00:59:48.073585 kubelet[2813]: E0123 00:59:48.073442 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:59:48.073585 kubelet[2813]: E0123 00:59:48.073513 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:59:48.074167 kubelet[2813]: E0123 00:59:48.073620 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cvsx8_calico-system(0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:48.077083 containerd[1536]: time="2026-01-23T00:59:48.076497317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 00:59:48.229617 containerd[1536]: time="2026-01-23T00:59:48.229447526Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:48.231177 containerd[1536]: time="2026-01-23T00:59:48.231117434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 00:59:48.231443 containerd[1536]: time="2026-01-23T00:59:48.231136961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 00:59:48.231567 kubelet[2813]: E0123 00:59:48.231445 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:59:48.231567 kubelet[2813]: E0123 00:59:48.231506 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:59:48.231822 kubelet[2813]: E0123 00:59:48.231613 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cvsx8_calico-system(0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:48.231822 kubelet[2813]: E0123 00:59:48.231676 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e"
Jan 23 00:59:48.917415 containerd[1536]: time="2026-01-23T00:59:48.917250237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 00:59:49.084838 containerd[1536]: time="2026-01-23T00:59:49.084776592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:49.086321 containerd[1536]: time="2026-01-23T00:59:49.086211836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 00:59:49.086321 containerd[1536]: time="2026-01-23T00:59:49.086278434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:59:49.086772 kubelet[2813]: E0123 00:59:49.086613 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:59:49.086772 kubelet[2813]: E0123 00:59:49.086684 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:59:49.088070 containerd[1536]: time="2026-01-23T00:59:49.087656191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 00:59:49.088352 kubelet[2813]: E0123 00:59:49.087090 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fc7f6fd7-dhdhc_calico-apiserver(68595a98-b8d6-439d-8c23-9d5c1a8e3d45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:49.088352 kubelet[2813]: E0123 00:59:49.087910 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" podUID="68595a98-b8d6-439d-8c23-9d5c1a8e3d45"
Jan 23 00:59:49.248548 containerd[1536]: time="2026-01-23T00:59:49.247635248Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:49.250009 containerd[1536]: time="2026-01-23T00:59:49.249938272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 00:59:49.250160 containerd[1536]: time="2026-01-23T00:59:49.250053604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:59:49.250406 kubelet[2813]: E0123 00:59:49.250298 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:59:49.250406 kubelet[2813]: E0123 00:59:49.250390 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:59:49.250682 kubelet[2813]: E0123 00:59:49.250511 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-cfwbw_calico-system(c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:49.250682 kubelet[2813]: E0123 00:59:49.250560 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cfwbw" podUID="c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82"
Jan 23 00:59:50.916686 containerd[1536]: time="2026-01-23T00:59:50.916630988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 00:59:51.072399 containerd[1536]: time="2026-01-23T00:59:51.072325340Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:51.074083 containerd[1536]: time="2026-01-23T00:59:51.074013789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 00:59:51.074237 containerd[1536]: time="2026-01-23T00:59:51.074034091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 00:59:51.074567 kubelet[2813]: E0123 00:59:51.074492 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 00:59:51.074567 kubelet[2813]: E0123 00:59:51.074560 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 00:59:51.075625 kubelet[2813]: E0123 00:59:51.074683 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f9c4644d-vck6k_calico-system(a4c1f677-ec37-4544-9262-69c2ea18781d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:51.075625 kubelet[2813]: E0123 00:59:51.074734 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k" podUID="a4c1f677-ec37-4544-9262-69c2ea18781d"
Jan 23 00:59:52.571397 systemd[1]: Started sshd@16-10.128.0.101:22-4.153.228.146:39046.service - OpenSSH per-connection server daemon (4.153.228.146:39046).
Jan 23 00:59:52.810042 sshd[5010]: Accepted publickey for core from 4.153.228.146 port 39046 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 00:59:52.811797 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:52.818553 systemd-logind[1512]: New session 17 of user core.
Jan 23 00:59:52.826462 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 00:59:53.058624 sshd[5013]: Connection closed by 4.153.228.146 port 39046
Jan 23 00:59:53.059612 sshd-session[5010]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:53.066743 systemd[1]: sshd@16-10.128.0.101:22-4.153.228.146:39046.service: Deactivated successfully.
Jan 23 00:59:53.069326 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 00:59:53.071075 systemd-logind[1512]: Session 17 logged out. Waiting for processes to exit.
Jan 23 00:59:53.073148 systemd-logind[1512]: Removed session 17.
Jan 23 00:59:54.918024 kubelet[2813]: E0123 00:59:54.917241 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7956844f94-rcgtz" podUID="c53a342e-2053-4bcb-9132-f1ca510f3ccf"
Jan 23 00:59:58.102673 systemd[1]: Started sshd@17-10.128.0.101:22-4.153.228.146:49914.service - OpenSSH per-connection server daemon (4.153.228.146:49914).
Jan 23 00:59:58.334041 sshd[5027]: Accepted publickey for core from 4.153.228.146 port 49914 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 00:59:58.336059 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:58.343650 systemd-logind[1512]: New session 18 of user core.
Jan 23 00:59:58.352488 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 00:59:58.587552 sshd[5030]: Connection closed by 4.153.228.146 port 49914
Jan 23 00:59:58.588569 sshd-session[5027]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:58.593938 systemd[1]: sshd@17-10.128.0.101:22-4.153.228.146:49914.service: Deactivated successfully.
Jan 23 00:59:58.597228 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 00:59:58.600737 systemd-logind[1512]: Session 18 logged out. Waiting for processes to exit.
Jan 23 00:59:58.602545 systemd-logind[1512]: Removed session 18.
Jan 23 00:59:58.915490 kubelet[2813]: E0123 00:59:58.915329 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" podUID="1226503f-d3f5-44b3-bde1-f270917649eb"
Jan 23 01:00:00.915389 kubelet[2813]: E0123 01:00:00.915209 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" podUID="68595a98-b8d6-439d-8c23-9d5c1a8e3d45"
Jan 23 01:00:02.916195 kubelet[2813]: E0123 01:00:02.916121 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cfwbw" podUID="c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82"
Jan 23 01:00:03.635647 systemd[1]: Started sshd@18-10.128.0.101:22-4.153.228.146:49918.service - OpenSSH per-connection server daemon (4.153.228.146:49918).
Jan 23 01:00:03.888966 sshd[5066]: Accepted publickey for core from 4.153.228.146 port 49918 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:00:03.890838 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:03.900293 systemd-logind[1512]: New session 19 of user core.
Jan 23 01:00:03.905497 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 01:00:03.915408 kubelet[2813]: E0123 01:00:03.915234 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k" podUID="a4c1f677-ec37-4544-9262-69c2ea18781d"
Jan 23 01:00:03.918429 kubelet[2813]: E0123 01:00:03.918307 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e"
Jan 23 01:00:04.138473 sshd[5069]: Connection closed by 4.153.228.146 port 49918
Jan 23 01:00:04.139758 sshd-session[5066]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:04.145293 systemd[1]: sshd@18-10.128.0.101:22-4.153.228.146:49918.service: Deactivated successfully.
Jan 23 01:00:04.146341 systemd-logind[1512]: Session 19 logged out. Waiting for processes to exit.
Jan 23 01:00:04.149085 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 01:00:04.153016 systemd-logind[1512]: Removed session 19.
Jan 23 01:00:04.183404 systemd[1]: Started sshd@19-10.128.0.101:22-4.153.228.146:49928.service - OpenSSH per-connection server daemon (4.153.228.146:49928).
Jan 23 01:00:04.420949 sshd[5081]: Accepted publickey for core from 4.153.228.146 port 49928 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:00:04.423772 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:04.431356 systemd-logind[1512]: New session 20 of user core.
Jan 23 01:00:04.438491 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 01:00:04.752074 sshd[5084]: Connection closed by 4.153.228.146 port 49928
Jan 23 01:00:04.752880 sshd-session[5081]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:04.759649 systemd[1]: sshd@19-10.128.0.101:22-4.153.228.146:49928.service: Deactivated successfully.
Jan 23 01:00:04.762863 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 01:00:04.764993 systemd-logind[1512]: Session 20 logged out. Waiting for processes to exit.
Jan 23 01:00:04.767611 systemd-logind[1512]: Removed session 20.
Jan 23 01:00:04.796050 systemd[1]: Started sshd@20-10.128.0.101:22-4.153.228.146:39714.service - OpenSSH per-connection server daemon (4.153.228.146:39714).
Jan 23 01:00:05.030583 sshd[5093]: Accepted publickey for core from 4.153.228.146 port 39714 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:00:05.032411 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:05.038779 systemd-logind[1512]: New session 21 of user core.
Jan 23 01:00:05.045466 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 01:00:05.897251 sshd[5096]: Connection closed by 4.153.228.146 port 39714
Jan 23 01:00:05.898558 sshd-session[5093]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:05.910569 systemd[1]: sshd@20-10.128.0.101:22-4.153.228.146:39714.service: Deactivated successfully.
Jan 23 01:00:05.912143 systemd-logind[1512]: Session 21 logged out. Waiting for processes to exit.
Jan 23 01:00:05.918121 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 01:00:05.919169 kubelet[2813]: E0123 01:00:05.919112 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7956844f94-rcgtz" podUID="c53a342e-2053-4bcb-9132-f1ca510f3ccf"
Jan 23 01:00:05.949789 systemd-logind[1512]: Removed session 21.
Jan 23 01:00:05.952829 systemd[1]: Started sshd@21-10.128.0.101:22-4.153.228.146:39718.service - OpenSSH per-connection server daemon (4.153.228.146:39718).
Jan 23 01:00:06.196628 sshd[5111]: Accepted publickey for core from 4.153.228.146 port 39718 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:00:06.198853 sshd-session[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:06.206027 systemd-logind[1512]: New session 22 of user core.
Jan 23 01:00:06.211516 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 01:00:06.608250 sshd[5114]: Connection closed by 4.153.228.146 port 39718
Jan 23 01:00:06.609592 sshd-session[5111]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:06.616883 systemd[1]: sshd@21-10.128.0.101:22-4.153.228.146:39718.service: Deactivated successfully.
Jan 23 01:00:06.620403 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 01:00:06.621786 systemd-logind[1512]: Session 22 logged out. Waiting for processes to exit.
Jan 23 01:00:06.624550 systemd-logind[1512]: Removed session 22.
Jan 23 01:00:06.654830 systemd[1]: Started sshd@22-10.128.0.101:22-4.153.228.146:39728.service - OpenSSH per-connection server daemon (4.153.228.146:39728).
Jan 23 01:00:06.904401 sshd[5124]: Accepted publickey for core from 4.153.228.146 port 39728 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:00:06.907608 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:06.923350 systemd-logind[1512]: New session 23 of user core.
Jan 23 01:00:06.930937 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 01:00:07.164744 sshd[5129]: Connection closed by 4.153.228.146 port 39728
Jan 23 01:00:07.167683 sshd-session[5124]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:07.172826 systemd[1]: sshd@22-10.128.0.101:22-4.153.228.146:39728.service: Deactivated successfully.
Jan 23 01:00:07.176054 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 01:00:07.178244 systemd-logind[1512]: Session 23 logged out. Waiting for processes to exit.
Jan 23 01:00:07.181118 systemd-logind[1512]: Removed session 23.
Jan 23 01:00:10.917295 kubelet[2813]: E0123 01:00:10.915378 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" podUID="1226503f-d3f5-44b3-bde1-f270917649eb"
Jan 23 01:00:12.210446 systemd[1]: Started sshd@23-10.128.0.101:22-4.153.228.146:39742.service - OpenSSH per-connection server daemon (4.153.228.146:39742).
Jan 23 01:00:12.449320 sshd[5144]: Accepted publickey for core from 4.153.228.146 port 39742 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:00:12.452631 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:12.458871 systemd-logind[1512]: New session 24 of user core.
Jan 23 01:00:12.465508 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 01:00:12.702431 sshd[5147]: Connection closed by 4.153.228.146 port 39742
Jan 23 01:00:12.705551 sshd-session[5144]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:12.712471 systemd[1]: sshd@23-10.128.0.101:22-4.153.228.146:39742.service: Deactivated successfully.
Jan 23 01:00:12.715930 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 01:00:12.717213 systemd-logind[1512]: Session 24 logged out. Waiting for processes to exit.
Jan 23 01:00:12.719610 systemd-logind[1512]: Removed session 24.
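The recurring "Back-off pulling image" records for the same pods reflect the kubelet's per-pod retry throttle rather than continuous retries: each failed pull lengthens the wait before the next attempt, up to a cap, which is why a given pod reappears at growing intervals in this log. Below is a sketch of that capped exponential back-off pattern; the constants are illustrative assumptions, not values read from kubelet source.

```python
import random

def capped_backoff(base: float = 10.0, factor: float = 2.0, cap: float = 300.0):
    """Yield successive retry delays: base, base*factor, ..., capped at cap.

    Constants are assumed for illustration (not taken from kubelet); the
    point is the shape: geometric growth to a ceiling, plus jitter so many
    failing pods do not retry in lockstep.
    """
    delay = base
    while True:
        yield delay * (1.0 + random.uniform(0.0, 0.1))  # up to 10% jitter
        delay = min(delay * factor, cap)

if __name__ == "__main__":
    delays = capped_backoff()
    print([round(next(delays)) for _ in range(6)])  # e.g. [10, 21, 42, 85, 168, 321]
```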
Jan 23 01:00:13.914603 kubelet[2813]: E0123 01:00:13.914546 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-dhdhc" podUID="68595a98-b8d6-439d-8c23-9d5c1a8e3d45"
Jan 23 01:00:14.917071 kubelet[2813]: E0123 01:00:14.916066 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cfwbw" podUID="c6fc1adc-08eb-414c-90b1-3cc3c5ed0e82"
Jan 23 01:00:16.916105 kubelet[2813]: E0123 01:00:16.916019 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7956844f94-rcgtz" podUID="c53a342e-2053-4bcb-9132-f1ca510f3ccf"
Jan 23 01:00:17.746817 systemd[1]: Started sshd@24-10.128.0.101:22-4.153.228.146:60784.service - OpenSSH per-connection server daemon (4.153.228.146:60784).
Jan 23 01:00:17.916715 kubelet[2813]: E0123 01:00:17.916632 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c4644d-vck6k" podUID="a4c1f677-ec37-4544-9262-69c2ea18781d"
Jan 23 01:00:17.989946 sshd[5163]: Accepted publickey for core from 4.153.228.146 port 60784 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:00:17.991607 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:18.000791 systemd-logind[1512]: New session 25 of user core.
Jan 23 01:00:18.005600 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 01:00:18.337157 sshd[5166]: Connection closed by 4.153.228.146 port 60784
Jan 23 01:00:18.338315 sshd-session[5163]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:18.346757 systemd[1]: sshd@24-10.128.0.101:22-4.153.228.146:60784.service: Deactivated successfully.
Jan 23 01:00:18.351828 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 01:00:18.359617 systemd-logind[1512]: Session 25 logged out. Waiting for processes to exit.
Jan 23 01:00:18.362643 systemd-logind[1512]: Removed session 25.
Jan 23 01:00:18.917932 kubelet[2813]: E0123 01:00:18.917807 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvsx8" podUID="0fe4dc2f-0955-4a3d-81e6-f5a1e1ac845e"
Jan 23 01:00:23.384615 systemd[1]: Started sshd@25-10.128.0.101:22-4.153.228.146:60800.service - OpenSSH per-connection server daemon (4.153.228.146:60800).
Jan 23 01:00:23.651606 sshd[5182]: Accepted publickey for core from 4.153.228.146 port 60800 ssh2: RSA SHA256:w76R5LQ5ytJtvAgHyZJoWa3fuScG5UB8S6J2p1oENss
Jan 23 01:00:23.654895 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:00:23.669845 systemd-logind[1512]: New session 26 of user core.
Jan 23 01:00:23.677530 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 01:00:23.963857 sshd[5189]: Connection closed by 4.153.228.146 port 60800
Jan 23 01:00:23.964695 sshd-session[5182]: pam_unix(sshd:session): session closed for user core
Jan 23 01:00:23.974486 systemd[1]: sshd@25-10.128.0.101:22-4.153.228.146:60800.service: Deactivated successfully.
Jan 23 01:00:23.978575 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 01:00:23.982827 systemd-logind[1512]: Session 26 logged out. Waiting for processes to exit.
Jan 23 01:00:23.987321 systemd-logind[1512]: Removed session 26.
Jan 23 01:00:24.924458 kubelet[2813]: E0123 01:00:24.923785 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fc7f6fd7-2br74" podUID="1226503f-d3f5-44b3-bde1-f270917649eb"