Oct 30 00:08:22.198814 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Oct 29 22:07:32 -00 2025
Oct 30 00:08:22.198867 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13
Oct 30 00:08:22.198890 kernel: BIOS-provided physical RAM map:
Oct 30 00:08:22.198905 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Oct 30 00:08:22.198921 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Oct 30 00:08:22.198935 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Oct 30 00:08:22.198953 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Oct 30 00:08:22.198968 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Oct 30 00:08:22.198994 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd318fff] usable
Oct 30 00:08:22.199010 kernel: BIOS-e820: [mem 0x00000000bd319000-0x00000000bd322fff] ACPI data
Oct 30 00:08:22.199024 kernel: BIOS-e820: [mem 0x00000000bd323000-0x00000000bf8ecfff] usable
Oct 30 00:08:22.199042 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Oct 30 00:08:22.199081 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Oct 30 00:08:22.199107 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Oct 30 00:08:22.199142 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Oct 30 00:08:22.199170 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Oct 30 00:08:22.199197 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Oct 30 00:08:22.199224 kernel: NX (Execute Disable) protection: active
Oct 30 00:08:22.199250 kernel: APIC: Static calls initialized
Oct 30 00:08:22.199276 kernel: efi: EFI v2.7 by EDK II
Oct 30 00:08:22.199302 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018 RNG=0xbfb73018 TPMEventLog=0xbd319018
Oct 30 00:08:22.199356 kernel: random: crng init done
Oct 30 00:08:22.199382 kernel: secureboot: Secure boot disabled
Oct 30 00:08:22.199407 kernel: SMBIOS 2.4 present.
Oct 30 00:08:22.199432 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Oct 30 00:08:22.199462 kernel: DMI: Memory slots populated: 1/1
Oct 30 00:08:22.199487 kernel: Hypervisor detected: KVM
Oct 30 00:08:22.199512 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Oct 30 00:08:22.199537 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 30 00:08:22.199562 kernel: kvm-clock: using sched offset of 16163792980 cycles
Oct 30 00:08:22.199608 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 30 00:08:22.199635 kernel: tsc: Detected 2299.998 MHz processor
Oct 30 00:08:22.199661 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 30 00:08:22.199687 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 30 00:08:22.199712 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Oct 30 00:08:22.199736 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Oct 30 00:08:22.199759 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 30 00:08:22.199785 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Oct 30 00:08:22.199810 kernel: Using GB pages for direct mapping
Oct 30 00:08:22.199840 kernel: ACPI: Early table checksum verification disabled
Oct 30 00:08:22.199876 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Oct 30 00:08:22.199903 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Oct 30 00:08:22.199938 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Oct 30 00:08:22.199966 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Oct 30 00:08:22.200002 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Oct 30 00:08:22.200030 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Oct 30 00:08:22.200057 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Oct 30 00:08:22.200084 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Oct 30 00:08:22.200112 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Oct 30 00:08:22.200143 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Oct 30 00:08:22.200171 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Oct 30 00:08:22.200198 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Oct 30 00:08:22.200225 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Oct 30 00:08:22.200253 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Oct 30 00:08:22.200280 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Oct 30 00:08:22.200308 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Oct 30 00:08:22.200359 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Oct 30 00:08:22.200379 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Oct 30 00:08:22.200406 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Oct 30 00:08:22.200428 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Oct 30 00:08:22.200449 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 30 00:08:22.200474 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Oct 30 00:08:22.200500 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Oct 30 00:08:22.200518 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Oct 30 00:08:22.200542 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Oct 30 00:08:22.200569 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff]
Oct 30 00:08:22.200597 kernel: Zone ranges:
Oct 30 00:08:22.200629 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 30 00:08:22.200656 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Oct 30 00:08:22.200683 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Oct 30 00:08:22.200711 kernel: Device empty
Oct 30 00:08:22.200737 kernel: Movable zone start for each node
Oct 30 00:08:22.200757 kernel: Early memory node ranges
Oct 30 00:08:22.200785 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Oct 30 00:08:22.200811 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Oct 30 00:08:22.201360 kernel: node 0: [mem 0x0000000000100000-0x00000000bd318fff]
Oct 30 00:08:22.201395 kernel: node 0: [mem 0x00000000bd323000-0x00000000bf8ecfff]
Oct 30 00:08:22.201415 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Oct 30 00:08:22.201433 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Oct 30 00:08:22.201451 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Oct 30 00:08:22.201470 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 30 00:08:22.201487 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Oct 30 00:08:22.201515 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Oct 30 00:08:22.201545 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Oct 30 00:08:22.201574 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 30 00:08:22.201608 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Oct 30 00:08:22.201636 kernel: ACPI: PM-Timer IO Port: 0xb008
Oct 30 00:08:22.201664 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 30 00:08:22.201692 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 30 00:08:22.201719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 30 00:08:22.201743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 30 00:08:22.201777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 30 00:08:22.201804 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 30 00:08:22.201832 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 30 00:08:22.201864 kernel: CPU topo: Max. logical packages: 1
Oct 30 00:08:22.201891 kernel: CPU topo: Max. logical dies: 1
Oct 30 00:08:22.201919 kernel: CPU topo: Max. dies per package: 1
Oct 30 00:08:22.201945 kernel: CPU topo: Max. threads per core: 2
Oct 30 00:08:22.201973 kernel: CPU topo: Num. cores per package: 1
Oct 30 00:08:22.202007 kernel: CPU topo: Num. threads per package: 2
Oct 30 00:08:22.202034 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Oct 30 00:08:22.202063 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Oct 30 00:08:22.202090 kernel: Booting paravirtualized kernel on KVM
Oct 30 00:08:22.202118 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 30 00:08:22.202149 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 30 00:08:22.202177 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Oct 30 00:08:22.202204 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Oct 30 00:08:22.202231 kernel: pcpu-alloc: [0] 0 1
Oct 30 00:08:22.202258 kernel: kvm-guest: PV spinlocks enabled
Oct 30 00:08:22.202285 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 30 00:08:22.202315 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13
Oct 30 00:08:22.203763 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 30 00:08:22.203791 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 30 00:08:22.203810 kernel: Fallback order for Node 0: 0
Oct 30 00:08:22.203828 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136
Oct 30 00:08:22.203846 kernel: Policy zone: Normal
Oct 30 00:08:22.203865 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 30 00:08:22.203882 kernel: software IO TLB: area num 2.
Oct 30 00:08:22.203914 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 30 00:08:22.203937 kernel: Kernel/User page tables isolation: enabled
Oct 30 00:08:22.203956 kernel: ftrace: allocating 40021 entries in 157 pages
Oct 30 00:08:22.203975 kernel: ftrace: allocated 157 pages with 5 groups
Oct 30 00:08:22.204004 kernel: Dynamic Preempt: voluntary
Oct 30 00:08:22.204024 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 30 00:08:22.204049 kernel: rcu: RCU event tracing is enabled.
Oct 30 00:08:22.204069 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 30 00:08:22.204089 kernel: Trampoline variant of Tasks RCU enabled.
Oct 30 00:08:22.204108 kernel: Rude variant of Tasks RCU enabled.
Oct 30 00:08:22.204132 kernel: Tracing variant of Tasks RCU enabled.
Oct 30 00:08:22.204151 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 30 00:08:22.204170 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 30 00:08:22.204188 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 30 00:08:22.204207 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 30 00:08:22.204227 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 30 00:08:22.204246 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 30 00:08:22.204265 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 30 00:08:22.204284 kernel: Console: colour dummy device 80x25
Oct 30 00:08:22.204309 kernel: printk: legacy console [ttyS0] enabled
Oct 30 00:08:22.204344 kernel: ACPI: Core revision 20240827
Oct 30 00:08:22.204365 kernel: APIC: Switch to symmetric I/O mode setup
Oct 30 00:08:22.204384 kernel: x2apic enabled
Oct 30 00:08:22.204404 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 30 00:08:22.204423 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Oct 30 00:08:22.204442 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Oct 30 00:08:22.204462 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Oct 30 00:08:22.204481 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Oct 30 00:08:22.204504 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Oct 30 00:08:22.204524 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 30 00:08:22.204543 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Oct 30 00:08:22.204562 kernel: Spectre V2 : Mitigation: IBRS
Oct 30 00:08:22.204582 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 30 00:08:22.204602 kernel: RETBleed: Mitigation: IBRS
Oct 30 00:08:22.204621 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 30 00:08:22.204641 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Oct 30 00:08:22.204661 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 30 00:08:22.204684 kernel: MDS: Mitigation: Clear CPU buffers
Oct 30 00:08:22.204703 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 30 00:08:22.204722 kernel: active return thunk: its_return_thunk
Oct 30 00:08:22.204741 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 30 00:08:22.204763 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 30 00:08:22.204783 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 30 00:08:22.204802 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 30 00:08:22.204820 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 30 00:08:22.204840 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 30 00:08:22.204863 kernel: Freeing SMP alternatives memory: 32K
Oct 30 00:08:22.204883 kernel: pid_max: default: 32768 minimum: 301
Oct 30 00:08:22.204902 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 30 00:08:22.204922 kernel: landlock: Up and running.
Oct 30 00:08:22.204941 kernel: SELinux: Initializing.
Oct 30 00:08:22.204959 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 30 00:08:22.204979 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 30 00:08:22.205004 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Oct 30 00:08:22.205023 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Oct 30 00:08:22.205052 kernel: signal: max sigframe size: 1776
Oct 30 00:08:22.205071 kernel: rcu: Hierarchical SRCU implementation.
Oct 30 00:08:22.205090 kernel: rcu: Max phase no-delay instances is 400.
Oct 30 00:08:22.205109 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 30 00:08:22.205129 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 30 00:08:22.205148 kernel: smp: Bringing up secondary CPUs ...
Oct 30 00:08:22.205167 kernel: smpboot: x86: Booting SMP configuration:
Oct 30 00:08:22.205186 kernel: .... node #0, CPUs: #1
Oct 30 00:08:22.205206 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Oct 30 00:08:22.205230 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Oct 30 00:08:22.205249 kernel: smp: Brought up 1 node, 2 CPUs
Oct 30 00:08:22.205270 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Oct 30 00:08:22.205290 kernel: Memory: 7558108K/7860544K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45544K init, 1184K bss, 296860K reserved, 0K cma-reserved)
Oct 30 00:08:22.205309 kernel: devtmpfs: initialized
Oct 30 00:08:22.207382 kernel: x86/mm: Memory block size: 128MB
Oct 30 00:08:22.207414 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Oct 30 00:08:22.207441 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 30 00:08:22.207477 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 30 00:08:22.207504 kernel: pinctrl core: initialized pinctrl subsystem
Oct 30 00:08:22.207528 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 30 00:08:22.207554 kernel: audit: initializing netlink subsys (disabled)
Oct 30 00:08:22.207580 kernel: audit: type=2000 audit(1761782896.333:1): state=initialized audit_enabled=0 res=1
Oct 30 00:08:22.207605 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 30 00:08:22.207633 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 30 00:08:22.207657 kernel: cpuidle: using governor menu
Oct 30 00:08:22.207684 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 30 00:08:22.207716 kernel: dca service started, version 1.12.1
Oct 30 00:08:22.207740 kernel: PCI: Using configuration type 1 for base access
Oct 30 00:08:22.207766 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 30 00:08:22.207791 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 30 00:08:22.207817 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 30 00:08:22.207843 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 30 00:08:22.207867 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 30 00:08:22.207893 kernel: ACPI: Added _OSI(Module Device)
Oct 30 00:08:22.207918 kernel: ACPI: Added _OSI(Processor Device)
Oct 30 00:08:22.207951 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 30 00:08:22.207977 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Oct 30 00:08:22.208011 kernel: ACPI: Interpreter enabled
Oct 30 00:08:22.208038 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 30 00:08:22.208062 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 30 00:08:22.208090 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 30 00:08:22.208115 kernel: PCI: Ignoring E820 reservations for host bridge windows
Oct 30 00:08:22.208137 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Oct 30 00:08:22.208166 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 30 00:08:22.209564 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 30 00:08:22.209822 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 30 00:08:22.210062 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 30 00:08:22.210086 kernel: PCI host bridge to bus 0000:00
Oct 30 00:08:22.210310 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 30 00:08:22.211584 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 30 00:08:22.211783 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 30 00:08:22.211971 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Oct 30 00:08:22.212170 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 30 00:08:22.212434 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 30 00:08:22.212702 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Oct 30 00:08:22.212952 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 30 00:08:22.213206 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Oct 30 00:08:22.215542 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Oct 30 00:08:22.215765 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Oct 30 00:08:22.215973 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Oct 30 00:08:22.216194 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 30 00:08:22.216442 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Oct 30 00:08:22.216644 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Oct 30 00:08:22.216897 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 30 00:08:22.217156 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Oct 30 00:08:22.219489 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Oct 30 00:08:22.219531 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 30 00:08:22.219559 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 30 00:08:22.219605 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 30 00:08:22.219627 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 30 00:08:22.219646 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 30 00:08:22.219679 kernel: iommu: Default domain type: Translated
Oct 30 00:08:22.219702 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 30 00:08:22.219726 kernel: efivars: Registered efivars operations
Oct 30 00:08:22.219750 kernel: PCI: Using ACPI for IRQ routing
Oct 30 00:08:22.219774 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 30 00:08:22.219811 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Oct 30 00:08:22.219836 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Oct 30 00:08:22.219859 kernel: e820: reserve RAM buffer [mem 0xbd319000-0xbfffffff]
Oct 30 00:08:22.219883 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Oct 30 00:08:22.219915 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Oct 30 00:08:22.219941 kernel: vgaarb: loaded
Oct 30 00:08:22.219965 kernel: clocksource: Switched to clocksource kvm-clock
Oct 30 00:08:22.219985 kernel: VFS: Disk quotas dquot_6.6.0
Oct 30 00:08:22.220005 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 30 00:08:22.220026 kernel: pnp: PnP ACPI init
Oct 30 00:08:22.220049 kernel: pnp: PnP ACPI: found 7 devices
Oct 30 00:08:22.220070 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 30 00:08:22.220093 kernel: NET: Registered PF_INET protocol family
Oct 30 00:08:22.220122 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 30 00:08:22.220143 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 30 00:08:22.220166 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 30 00:08:22.220188 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 30 00:08:22.220209 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Oct 30 00:08:22.220230 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 30 00:08:22.220254 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 30 00:08:22.220279 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 30 00:08:22.220302 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 30 00:08:22.220354 kernel: NET: Registered PF_XDP protocol family
Oct 30 00:08:22.220611 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 30 00:08:22.220829 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 30 00:08:22.221032 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 30 00:08:22.221232 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Oct 30 00:08:22.223550 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 30 00:08:22.223599 kernel: PCI: CLS 0 bytes, default 64
Oct 30 00:08:22.223638 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 30 00:08:22.223666 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Oct 30 00:08:22.223694 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 30 00:08:22.223722 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Oct 30 00:08:22.223756 kernel: clocksource: Switched to clocksource tsc
Oct 30 00:08:22.223942 kernel: Initialise system trusted keyrings
Oct 30 00:08:22.224003 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Oct 30 00:08:22.224034 kernel: Key type asymmetric registered
Oct 30 00:08:22.224071 kernel: Asymmetric key parser 'x509' registered
Oct 30 00:08:22.224109 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 30 00:08:22.224136 kernel: io scheduler mq-deadline registered
Oct 30 00:08:22.224163 kernel: io scheduler kyber registered
Oct 30 00:08:22.224189 kernel: io scheduler bfq registered
Oct 30 00:08:22.224217 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 30 00:08:22.224245 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 30 00:08:22.224587 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Oct 30 00:08:22.224626 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Oct 30 00:08:22.224900 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Oct 30 00:08:22.224946 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 30 00:08:22.225220 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Oct 30 00:08:22.225261 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 30 00:08:22.225293 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 30 00:08:22.227378 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Oct 30 00:08:22.227432 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Oct 30 00:08:22.227464 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Oct 30 00:08:22.227759 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Oct 30 00:08:22.227798 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 30 00:08:22.227818 kernel: i8042: Warning: Keylock active
Oct 30 00:08:22.227837 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 30 00:08:22.227856 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 30 00:08:22.228079 kernel: rtc_cmos 00:00: RTC can wake from S4
Oct 30 00:08:22.228279 kernel: rtc_cmos 00:00: registered as rtc0
Oct 30 00:08:22.228499 kernel: rtc_cmos 00:00: setting system clock to 2025-10-30T00:08:21 UTC (1761782901)
Oct 30 00:08:22.228704 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Oct 30 00:08:22.228729 kernel: intel_pstate: CPU model not supported
Oct 30 00:08:22.228751 kernel: pstore: Using crash dump compression: deflate
Oct 30 00:08:22.228772 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 30 00:08:22.228792 kernel: NET: Registered PF_INET6 protocol family
Oct 30 00:08:22.228810 kernel: Segment Routing with IPv6
Oct 30 00:08:22.228831 kernel: In-situ OAM (IOAM) with IPv6
Oct 30 00:08:22.228851 kernel: NET: Registered PF_PACKET protocol family
Oct 30 00:08:22.228872 kernel: Key type dns_resolver registered
Oct 30 00:08:22.228891 kernel: IPI shorthand broadcast: enabled
Oct 30 00:08:22.228916 kernel: sched_clock: Marking stable (4001006334, 971283066)->(5326881748, -354592348)
Oct 30 00:08:22.228936 kernel: registered taskstats version 1
Oct 30 00:08:22.228955 kernel: Loading compiled-in X.509 certificates
Oct 30 00:08:22.228975 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 815fc40077fbc06b8d9e8a6016fea83aecff0a2a'
Oct 30 00:08:22.228995 kernel: Demotion targets for Node 0: null
Oct 30 00:08:22.229015 kernel: Key type .fscrypt registered
Oct 30 00:08:22.229034 kernel: Key type fscrypt-provisioning registered
Oct 30 00:08:22.229053 kernel: ima: Allocated hash algorithm: sha1
Oct 30 00:08:22.229081 kernel: ima: No architecture policies found
Oct 30 00:08:22.229106 kernel: clk: Disabling unused clocks
Oct 30 00:08:22.229126 kernel: Warning: unable to open an initial console.
Oct 30 00:08:22.229146 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 30 00:08:22.229167 kernel: Freeing unused kernel image (initmem) memory: 45544K
Oct 30 00:08:22.229187 kernel: Write protecting the kernel read-only data: 40960k
Oct 30 00:08:22.229207 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K
Oct 30 00:08:22.229227 kernel: Run /init as init process
Oct 30 00:08:22.229247 kernel: with arguments:
Oct 30 00:08:22.229266 kernel: /init
Oct 30 00:08:22.229289 kernel: with environment:
Oct 30 00:08:22.229308 kernel: HOME=/
Oct 30 00:08:22.231420 kernel: TERM=linux
Oct 30 00:08:22.231461 systemd[1]: Successfully made /usr/ read-only.
Oct 30 00:08:22.231501 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 00:08:22.231535 systemd[1]: Detected virtualization google.
Oct 30 00:08:22.231568 systemd[1]: Detected architecture x86-64.
Oct 30 00:08:22.231609 systemd[1]: Running in initrd.
Oct 30 00:08:22.231641 systemd[1]: No hostname configured, using default hostname.
Oct 30 00:08:22.231676 systemd[1]: Hostname set to .
Oct 30 00:08:22.231709 systemd[1]: Initializing machine ID from random generator.
Oct 30 00:08:22.231742 systemd[1]: Queued start job for default target initrd.target.
Oct 30 00:08:22.231776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:08:22.231832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:08:22.231872 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 30 00:08:22.231906 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 00:08:22.231941 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 30 00:08:22.231978 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 30 00:08:22.232015 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 30 00:08:22.232053 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 30 00:08:22.232095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:08:22.232130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:08:22.232164 systemd[1]: Reached target paths.target - Path Units.
Oct 30 00:08:22.232199 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 00:08:22.232234 systemd[1]: Reached target swap.target - Swaps.
Oct 30 00:08:22.232268 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 00:08:22.232302 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 00:08:22.232715 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 00:08:22.232975 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 30 00:08:22.233010 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 30 00:08:22.233259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:08:22.233515 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:08:22.233758 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:08:22.233793 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 00:08:22.234035 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 30 00:08:22.234199 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 00:08:22.234240 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 30 00:08:22.234275 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 30 00:08:22.234309 systemd[1]: Starting systemd-fsck-usr.service...
Oct 30 00:08:22.234401 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 00:08:22.234436 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 00:08:22.234471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:08:22.234506 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 30 00:08:22.234549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 00:08:22.234585 systemd[1]: Finished systemd-fsck-usr.service. Oct 30 00:08:22.234621 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 30 00:08:22.234708 systemd-journald[191]: Collecting audit messages is disabled. Oct 30 00:08:22.234785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:08:22.234823 systemd-journald[191]: Journal started Oct 30 00:08:22.234893 systemd-journald[191]: Runtime Journal (/run/log/journal/611065dc02ba4bd48de87fd5238584f0) is 8M, max 148.6M, 140.6M free. Oct 30 00:08:22.198371 systemd-modules-load[192]: Inserted module 'overlay' Oct 30 00:08:22.240229 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 00:08:22.245216 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 30 00:08:22.253835 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 00:08:22.259480 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 30 00:08:22.264536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 00:08:22.268510 kernel: Bridge firewalling registered Oct 30 00:08:22.265339 systemd-modules-load[192]: Inserted module 'br_netfilter' Oct 30 00:08:22.284104 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 30 00:08:22.291412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 00:08:22.307023 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 00:08:22.310043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 00:08:22.323883 systemd-tmpfiles[212]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 30 00:08:22.330373 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 00:08:22.339594 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 30 00:08:22.342934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 00:08:22.351907 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 00:08:22.364780 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 00:08:22.390162 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13 Oct 30 00:08:22.453799 systemd-resolved[231]: Positive Trust Anchors:
Oct 30 00:08:22.454554 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 00:08:22.454638 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 00:08:22.470816 systemd-resolved[231]: Defaulting to hostname 'linux'. Oct 30 00:08:22.475252 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 00:08:22.479755 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 00:08:22.543381 kernel: SCSI subsystem initialized Oct 30 00:08:22.557371 kernel: Loading iSCSI transport class v2.0-870. Oct 30 00:08:22.571392 kernel: iscsi: registered transport (tcp) Oct 30 00:08:22.600371 kernel: iscsi: registered transport (qla4xxx) Oct 30 00:08:22.600458 kernel: QLogic iSCSI HBA Driver Oct 30 00:08:22.627741 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 00:08:22.650559 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 00:08:22.658811 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 00:08:22.730310 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 30 00:08:22.733493 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 30 00:08:22.800429 kernel: raid6: avx2x4 gen() 17622 MB/s Oct 30 00:08:22.818402 kernel: raid6: avx2x2 gen() 17696 MB/s Oct 30 00:08:22.836032 kernel: raid6: avx2x1 gen() 13930 MB/s Oct 30 00:08:22.836122 kernel: raid6: using algorithm avx2x2 gen() 17696 MB/s Oct 30 00:08:22.854107 kernel: raid6: .... xor() 18194 MB/s, rmw enabled Oct 30 00:08:22.854178 kernel: raid6: using avx2x2 recovery algorithm Oct 30 00:08:22.879381 kernel: xor: automatically using best checksumming function avx Oct 30 00:08:23.082390 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 30 00:08:23.091954 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 30 00:08:23.096644 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 00:08:23.144066 systemd-udevd[440]: Using default interface naming scheme 'v255'. Oct 30 00:08:23.155011 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 00:08:23.162815 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 30 00:08:23.197781 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation Oct 30 00:08:23.236046 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 00:08:23.238838 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 00:08:23.346972 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 00:08:23.353536 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 30 00:08:23.485770 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Oct 30 00:08:23.523367 kernel: cryptd: max_cpu_qlen set to 1000 Oct 30 00:08:23.554730 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 30 00:08:23.640380 kernel: scsi host0: Virtio SCSI HBA Oct 30 00:08:23.649103 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 30 00:08:23.649458 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:08:23.669365 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:08:23.674565 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Oct 30 00:08:23.680287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:08:23.684837 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 30 00:08:23.723616 kernel: AES CTR mode by8 optimization enabled Oct 30 00:08:23.743396 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Oct 30 00:08:23.745632 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Oct 30 00:08:23.748357 kernel: sd 0:0:1:0: [sda] Write Protect is off Oct 30 00:08:23.748771 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Oct 30 00:08:23.749127 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 30 00:08:23.759488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:08:23.770920 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 30 00:08:23.771027 kernel: GPT:17805311 != 33554431 Oct 30 00:08:23.771060 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 30 00:08:23.772453 kernel: GPT:17805311 != 33554431 Oct 30 00:08:23.772509 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 30 00:08:23.774360 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 30 00:08:23.775916 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Oct 30 00:08:23.870166 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Oct 30 00:08:23.884997 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 30 00:08:23.901290 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. 
Oct 30 00:08:23.936598 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Oct 30 00:08:23.951713 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Oct 30 00:08:23.951944 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Oct 30 00:08:23.957301 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 00:08:23.962502 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 00:08:23.967528 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 00:08:23.974214 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 30 00:08:23.981578 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 30 00:08:24.007760 disk-uuid[595]: Primary Header is updated. Oct 30 00:08:24.007760 disk-uuid[595]: Secondary Entries is updated. Oct 30 00:08:24.007760 disk-uuid[595]: Secondary Header is updated. Oct 30 00:08:24.013546 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 30 00:08:24.025368 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 30 00:08:24.051357 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 30 00:08:25.069983 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 30 00:08:25.072787 disk-uuid[601]: The operation has completed successfully. Oct 30 00:08:25.173268 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 30 00:08:25.173483 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 30 00:08:25.233299 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 30 00:08:25.262834 sh[617]: Success Oct 30 00:08:25.288581 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 30 00:08:25.290104 kernel: device-mapper: uevent: version 1.0.3 Oct 30 00:08:25.290162 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 30 00:08:25.305368 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Oct 30 00:08:25.394538 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 30 00:08:25.400916 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 30 00:08:25.418772 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 30 00:08:25.438436 kernel: BTRFS: device fsid ad8523d8-35e6-44b9-a685-e8d871101da4 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (629) Oct 30 00:08:25.438563 kernel: BTRFS info (device dm-0): first mount of filesystem ad8523d8-35e6-44b9-a685-e8d871101da4 Oct 30 00:08:25.441355 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:08:25.469734 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 30 00:08:25.469894 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 30 00:08:25.469930 kernel: BTRFS info (device dm-0): enabling free space tree Oct 30 00:08:25.475996 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 30 00:08:25.477479 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 30 00:08:25.481572 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 30 00:08:25.484437 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 30 00:08:25.495341 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 30 00:08:25.550406 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (662) Oct 30 00:08:25.554517 kernel: BTRFS info (device sda6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:08:25.554609 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:08:25.564409 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 30 00:08:25.564535 kernel: BTRFS info (device sda6): turning on async discard Oct 30 00:08:25.564570 kernel: BTRFS info (device sda6): enabling free space tree Oct 30 00:08:25.573416 kernel: BTRFS info (device sda6): last unmount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:08:25.575185 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 30 00:08:25.585643 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 30 00:08:25.702396 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 00:08:25.725231 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 00:08:25.865253 systemd-networkd[798]: lo: Link UP Oct 30 00:08:25.865273 systemd-networkd[798]: lo: Gained carrier Oct 30 00:08:25.871391 systemd-networkd[798]: Enumeration completed Oct 30 00:08:25.872033 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 00:08:25.872042 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 00:08:25.877260 ignition[723]: Ignition 2.22.0 Oct 30 00:08:25.874961 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 00:08:25.877275 ignition[723]: Stage: fetch-offline Oct 30 00:08:25.876423 systemd[1]: Reached target network.target - Network. 
Oct 30 00:08:25.877366 ignition[723]: no configs at "/usr/lib/ignition/base.d" Oct 30 00:08:25.881377 systemd-networkd[798]: eth0: Link UP Oct 30 00:08:25.877394 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 30 00:08:25.882925 systemd-networkd[798]: eth0: Gained carrier Oct 30 00:08:25.877561 ignition[723]: parsed url from cmdline: "" Oct 30 00:08:25.882955 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 00:08:25.877570 ignition[723]: no config URL provided Oct 30 00:08:25.884682 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 00:08:25.877581 ignition[723]: reading system config file "/usr/lib/ignition/user.ign" Oct 30 00:08:25.893228 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 30 00:08:25.877598 ignition[723]: no config at "/usr/lib/ignition/user.ign" Oct 30 00:08:25.895467 systemd-networkd[798]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8.c.flatcar-212911.internal' to 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8' Oct 30 00:08:25.877624 ignition[723]: failed to fetch config: resource requires networking Oct 30 00:08:25.895947 systemd-networkd[798]: eth0: DHCPv4 address 10.128.0.23/32, gateway 10.128.0.1 acquired from 169.254.169.254 Oct 30 00:08:25.877928 ignition[723]: Ignition finished successfully Oct 30 00:08:25.960493 ignition[807]: Ignition 2.22.0 Oct 30 00:08:25.960512 ignition[807]: Stage: fetch Oct 30 00:08:25.960792 ignition[807]: no configs at "/usr/lib/ignition/base.d" Oct 30 00:08:25.960814 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 30 00:08:25.960971 ignition[807]: parsed url from cmdline: "" Oct 30 00:08:25.960979 ignition[807]: no config URL provided Oct 30 00:08:25.960989 ignition[807]: reading system config file "/usr/lib/ignition/user.ign"
Oct 30 00:08:25.961003 ignition[807]: no config at "/usr/lib/ignition/user.ign" Oct 30 00:08:25.961067 ignition[807]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Oct 30 00:08:25.979721 unknown[807]: fetched base config from "system" Oct 30 00:08:25.967675 ignition[807]: GET result: OK Oct 30 00:08:25.979737 unknown[807]: fetched base config from "system" Oct 30 00:08:25.967963 ignition[807]: parsing config with SHA512: 5e4f54fd29e9bfe2b17e55d88805c030a0b7be0bac54372c9d8d58123996a50367e893fdccecf3608c3498762d5c0fda07737183bc5e842d8b653d600f38261b Oct 30 00:08:25.979748 unknown[807]: fetched user config from "gcp" Oct 30 00:08:25.981084 ignition[807]: fetch: fetch complete Oct 30 00:08:25.986254 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 30 00:08:25.981095 ignition[807]: fetch: fetch passed Oct 30 00:08:25.994277 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 30 00:08:25.981185 ignition[807]: Ignition finished successfully Oct 30 00:08:26.043910 ignition[815]: Ignition 2.22.0 Oct 30 00:08:26.043930 ignition[815]: Stage: kargs Oct 30 00:08:26.044168 ignition[815]: no configs at "/usr/lib/ignition/base.d" Oct 30 00:08:26.048649 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 30 00:08:26.044187 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 30 00:08:26.045585 ignition[815]: kargs: kargs passed Oct 30 00:08:26.059666 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 30 00:08:26.045655 ignition[815]: Ignition finished successfully Oct 30 00:08:26.103776 ignition[822]: Ignition 2.22.0 Oct 30 00:08:26.103799 ignition[822]: Stage: disks Oct 30 00:08:26.104065 ignition[822]: no configs at "/usr/lib/ignition/base.d" Oct 30 00:08:26.109907 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 30 00:08:26.104089 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 30 00:08:26.113970 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 30 00:08:26.105621 ignition[822]: disks: disks passed Oct 30 00:08:26.119504 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 30 00:08:26.105744 ignition[822]: Ignition finished successfully Oct 30 00:08:26.124515 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 00:08:26.130001 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 00:08:26.132957 systemd[1]: Reached target basic.target - Basic System. Oct 30 00:08:26.142475 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 30 00:08:26.200784 systemd-fsck[831]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Oct 30 00:08:26.219346 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 30 00:08:26.226560 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 30 00:08:26.430373 kernel: EXT4-fs (sda9): mounted filesystem 02607114-2ead-44bc-a76e-2d51f82b108e r/w with ordered data mode. Quota mode: none. Oct 30 00:08:26.431232 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 30 00:08:26.433726 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 30 00:08:26.441368 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 00:08:26.448031 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 30 00:08:26.453488 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 30 00:08:26.453585 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Oct 30 00:08:26.453631 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 00:08:26.469489 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 30 00:08:26.479618 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 30 00:08:26.494543 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (839) Oct 30 00:08:26.494600 kernel: BTRFS info (device sda6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:08:26.494660 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:08:26.499623 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 30 00:08:26.499705 kernel: BTRFS info (device sda6): turning on async discard Oct 30 00:08:26.499757 kernel: BTRFS info (device sda6): enabling free space tree Oct 30 00:08:26.502780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 30 00:08:26.613200 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Oct 30 00:08:26.625027 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Oct 30 00:08:26.636995 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Oct 30 00:08:26.647776 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Oct 30 00:08:26.812891 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 30 00:08:26.819438 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 30 00:08:26.823984 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 30 00:08:26.855870 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 30 00:08:26.858727 kernel: BTRFS info (device sda6): last unmount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:08:26.889476 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 30 00:08:26.908534 ignition[952]: INFO : Ignition 2.22.0 Oct 30 00:08:26.908534 ignition[952]: INFO : Stage: mount Oct 30 00:08:26.915742 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 00:08:26.915742 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 30 00:08:26.915742 ignition[952]: INFO : mount: mount passed Oct 30 00:08:26.915742 ignition[952]: INFO : Ignition finished successfully Oct 30 00:08:26.913905 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 30 00:08:26.921484 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 30 00:08:26.949844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 00:08:26.983531 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (963) Oct 30 00:08:26.987112 kernel: BTRFS info (device sda6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:08:26.987206 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:08:26.996746 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 30 00:08:26.996828 kernel: BTRFS info (device sda6): turning on async discard Oct 30 00:08:26.996845 kernel: BTRFS info (device sda6): enabling free space tree Oct 30 00:08:26.999761 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 30 00:08:27.044394 ignition[979]: INFO : Ignition 2.22.0 Oct 30 00:08:27.044394 ignition[979]: INFO : Stage: files Oct 30 00:08:27.051187 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 00:08:27.051187 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 30 00:08:27.051187 ignition[979]: DEBUG : files: compiled without relabeling support, skipping Oct 30 00:08:27.051187 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 30 00:08:27.051187 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 30 00:08:27.066480 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 30 00:08:27.066480 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 30 00:08:27.066480 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 30 00:08:27.066480 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 30 00:08:27.066480 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 30 00:08:27.057503 unknown[979]: wrote ssh authorized keys file for user: core Oct 30 00:08:27.290540 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 30 00:08:27.684576 systemd-networkd[798]: eth0: Gained IPv6LL Oct 30 00:08:27.687070 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 30 00:08:27.692787 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 30 00:08:27.692787 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 30 00:08:27.692787 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 30 00:08:27.692787 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 30 00:08:27.692787 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 00:08:27.692787 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 00:08:27.692787 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 00:08:27.692787 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 00:08:27.692787 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 00:08:27.692787 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 00:08:27.692787 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:08:27.733555 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:08:27.733555 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:08:27.733555 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Oct 30 00:08:28.206691 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 30 00:08:28.845000 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:08:28.845000 ignition[979]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 30 00:08:28.856527 ignition[979]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 00:08:28.856527 ignition[979]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 00:08:28.856527 ignition[979]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 30 00:08:28.856527 ignition[979]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 30 00:08:28.856527 ignition[979]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 30 00:08:28.856527 ignition[979]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 30 00:08:28.856527 ignition[979]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 30 00:08:28.856527 ignition[979]: INFO : files: files passed Oct 30 00:08:28.856527 ignition[979]: INFO : Ignition finished successfully Oct 30 00:08:28.857013 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 30 00:08:28.865172 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 30 00:08:28.881822 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 30 00:08:28.892840 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 30 00:08:28.898344 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 30 00:08:28.920520 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 00:08:28.920520 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 30 00:08:28.925031 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 00:08:28.924555 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 00:08:28.932765 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 30 00:08:28.938667 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 30 00:08:28.995418 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 30 00:08:28.995633 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 30 00:08:29.001468 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 30 00:08:29.004563 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 30 00:08:29.009087 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 30 00:08:29.012731 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 30 00:08:29.052160 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 00:08:29.059587 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 30 00:08:29.106084 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 30 00:08:29.108280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Oct 30 00:08:29.108942 systemd[1]: Stopped target timers.target - Timer Units. Oct 30 00:08:29.116836 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 30 00:08:29.117169 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 00:08:29.123512 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 30 00:08:29.128828 systemd[1]: Stopped target basic.target - Basic System. Oct 30 00:08:29.133045 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 30 00:08:29.138112 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 00:08:29.143127 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 30 00:08:29.148049 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 30 00:08:29.155105 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 30 00:08:29.157783 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 00:08:29.164746 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 30 00:08:29.167791 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 30 00:08:29.172663 systemd[1]: Stopped target swap.target - Swaps. Oct 30 00:08:29.175433 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 30 00:08:29.176633 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 30 00:08:29.183904 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 30 00:08:29.189952 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 00:08:29.194746 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 30 00:08:29.195411 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 00:08:29.197569 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Oct 30 00:08:29.198974 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 30 00:08:29.206959 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 30 00:08:29.208172 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 00:08:29.211549 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 30 00:08:29.212458 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 30 00:08:29.219162 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 30 00:08:29.230546 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 30 00:08:29.230911 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:08:29.246632 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 30 00:08:29.248135 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 30 00:08:29.249730 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:08:29.256675 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 30 00:08:29.256916 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 00:08:29.274962 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 30 00:08:29.278477 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 30 00:08:29.290351 ignition[1034]: INFO : Ignition 2.22.0
Oct 30 00:08:29.290351 ignition[1034]: INFO : Stage: umount
Oct 30 00:08:29.290351 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 00:08:29.300173 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Oct 30 00:08:29.300173 ignition[1034]: INFO : umount: umount passed
Oct 30 00:08:29.300173 ignition[1034]: INFO : Ignition finished successfully
Oct 30 00:08:29.296165 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 30 00:08:29.296373 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 30 00:08:29.303671 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 30 00:08:29.304020 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 30 00:08:29.311616 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 30 00:08:29.311741 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 30 00:08:29.314936 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 30 00:08:29.315051 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 30 00:08:29.323808 systemd[1]: Stopped target network.target - Network.
Oct 30 00:08:29.326828 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 30 00:08:29.326975 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 00:08:29.331196 systemd[1]: Stopped target paths.target - Path Units.
Oct 30 00:08:29.335999 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 30 00:08:29.337923 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:08:29.341203 systemd[1]: Stopped target slices.target - Slice Units.
Oct 30 00:08:29.345965 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 30 00:08:29.350081 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 30 00:08:29.350540 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 00:08:29.354047 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 30 00:08:29.354283 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 00:08:29.358004 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 30 00:08:29.358709 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 30 00:08:29.362251 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 30 00:08:29.362689 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 30 00:08:29.365738 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 30 00:08:29.368996 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 30 00:08:29.377551 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 30 00:08:29.379445 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 30 00:08:29.379604 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 30 00:08:29.390153 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 30 00:08:29.390531 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 30 00:08:29.390680 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 30 00:08:29.395285 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 30 00:08:29.395733 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 30 00:08:29.395867 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 30 00:08:29.401937 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 30 00:08:29.403631 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 30 00:08:29.404019 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:08:29.407899 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 30 00:08:29.408151 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 30 00:08:29.415061 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 30 00:08:29.428510 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 30 00:08:29.428691 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 00:08:29.433649 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 30 00:08:29.433794 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:08:29.438854 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 30 00:08:29.438973 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:08:29.446552 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 30 00:08:29.446678 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 00:08:29.452220 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 00:08:29.466917 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 30 00:08:29.467065 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 30 00:08:29.483482 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 30 00:08:29.485649 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 00:08:29.492054 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 30 00:08:29.492192 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 30 00:08:29.495868 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 30 00:08:29.496036 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:08:29.500656 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 30 00:08:29.500752 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:08:29.506823 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 30 00:08:29.506929 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 00:08:29.522856 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 30 00:08:29.523022 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 30 00:08:29.531258 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 30 00:08:29.531853 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 00:08:29.542540 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 30 00:08:29.549575 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 30 00:08:29.549931 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:08:29.553005 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 30 00:08:29.553121 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:08:29.563770 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:08:29.563887 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:08:29.571062 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Oct 30 00:08:29.571201 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 30 00:08:29.571303 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 30 00:08:29.577892 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 30 00:08:29.578111 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 30 00:08:29.582429 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 30 00:08:29.589734 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 30 00:08:29.621604 systemd[1]: Switching root.
Oct 30 00:08:29.667958 systemd-journald[191]: Journal stopped
Oct 30 00:08:32.110066 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Oct 30 00:08:32.110475 kernel: SELinux: policy capability network_peer_controls=1
Oct 30 00:08:32.110525 kernel: SELinux: policy capability open_perms=1
Oct 30 00:08:32.110555 kernel: SELinux: policy capability extended_socket_class=1
Oct 30 00:08:32.110581 kernel: SELinux: policy capability always_check_network=0
Oct 30 00:08:32.110610 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 30 00:08:32.110640 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 30 00:08:32.112050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 30 00:08:32.112104 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 30 00:08:32.112135 kernel: SELinux: policy capability userspace_initial_context=0
Oct 30 00:08:32.112164 kernel: audit: type=1403 audit(1761782910.295:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 30 00:08:32.112200 systemd[1]: Successfully loaded SELinux policy in 78.203ms.
Oct 30 00:08:32.112231 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.600ms.
Oct 30 00:08:32.112266 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 00:08:32.112307 systemd[1]: Detected virtualization google.
Oct 30 00:08:32.115410 systemd[1]: Detected architecture x86-64.
Oct 30 00:08:32.120413 systemd[1]: Detected first boot.
Oct 30 00:08:32.120469 systemd[1]: Initializing machine ID from random generator.
Oct 30 00:08:32.120502 zram_generator::config[1077]: No configuration found.
Oct 30 00:08:32.120550 kernel: Guest personality initialized and is inactive
Oct 30 00:08:32.120581 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 30 00:08:32.120610 kernel: Initialized host personality
Oct 30 00:08:32.120639 kernel: NET: Registered PF_VSOCK protocol family
Oct 30 00:08:32.120671 systemd[1]: Populated /etc with preset unit settings.
Oct 30 00:08:32.120704 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 30 00:08:32.120735 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 30 00:08:32.120768 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 30 00:08:32.120805 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 30 00:08:32.120840 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 30 00:08:32.120872 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 30 00:08:32.120907 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 30 00:08:32.120940 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 30 00:08:32.120990 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 30 00:08:32.121024 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 30 00:08:32.121064 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 30 00:08:32.121095 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 30 00:08:32.121127 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:08:32.121160 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:08:32.121194 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 30 00:08:32.121226 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 30 00:08:32.121270 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 30 00:08:32.121305 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 00:08:32.121378 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 30 00:08:32.121413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:08:32.121446 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:08:32.121481 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 30 00:08:32.121515 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 30 00:08:32.121549 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 30 00:08:32.121581 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 30 00:08:32.121621 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:08:32.121654 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 00:08:32.121687 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 00:08:32.121719 systemd[1]: Reached target swap.target - Swaps.
Oct 30 00:08:32.121750 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 30 00:08:32.121784 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 30 00:08:32.121817 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 30 00:08:32.121859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:08:32.121897 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:08:32.121931 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:08:32.121974 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 30 00:08:32.122009 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 30 00:08:32.122042 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 30 00:08:32.122073 systemd[1]: Mounting media.mount - External Media Directory...
Oct 30 00:08:32.122114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:08:32.122154 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 30 00:08:32.122189 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 30 00:08:32.122221 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 30 00:08:32.122257 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 30 00:08:32.122290 systemd[1]: Reached target machines.target - Containers.
Oct 30 00:08:32.128007 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 30 00:08:32.128085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:08:32.128132 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 00:08:32.128170 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 30 00:08:32.128203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 00:08:32.128238 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 00:08:32.128272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 00:08:32.128306 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 30 00:08:32.128369 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 00:08:32.128399 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 30 00:08:32.128434 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 30 00:08:32.128462 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 30 00:08:32.128490 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 30 00:08:32.128519 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 30 00:08:32.128548 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:08:32.128577 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 00:08:32.128606 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 00:08:32.128643 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 00:08:32.128817 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 30 00:08:32.128859 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 30 00:08:32.128893 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 00:08:32.128925 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 30 00:08:32.128969 systemd[1]: Stopped verity-setup.service.
Oct 30 00:08:32.129004 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:08:32.129039 kernel: fuse: init (API version 7.41)
Oct 30 00:08:32.129071 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 30 00:08:32.129107 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 30 00:08:32.129146 systemd[1]: Mounted media.mount - External Media Directory.
Oct 30 00:08:32.129181 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 30 00:08:32.129211 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 30 00:08:32.129235 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 30 00:08:32.129263 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:08:32.129297 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 30 00:08:32.129359 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 30 00:08:32.129394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 00:08:32.129436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 00:08:32.129468 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 00:08:32.129501 kernel: loop: module loaded
Oct 30 00:08:32.129532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 00:08:32.129566 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 30 00:08:32.129599 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 30 00:08:32.129633 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:08:32.129667 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 30 00:08:32.129703 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:08:32.129743 kernel: ACPI: bus type drm_connector registered
Oct 30 00:08:32.129774 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 00:08:32.129807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 00:08:32.129841 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 00:08:32.129876 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 00:08:32.129923 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 00:08:32.129972 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 30 00:08:32.130011 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 30 00:08:32.130051 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 30 00:08:32.130086 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 00:08:32.130119 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 30 00:08:32.130154 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 30 00:08:32.130188 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:08:32.130222 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 30 00:08:32.130255 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 00:08:32.130296 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 30 00:08:32.140509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 00:08:32.140633 systemd-journald[1150]: Collecting audit messages is disabled.
Oct 30 00:08:32.140719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 00:08:32.140754 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 30 00:08:32.140787 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 30 00:08:32.140822 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 30 00:08:32.140855 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 30 00:08:32.140891 systemd-journald[1150]: Journal started
Oct 30 00:08:32.141164 systemd-journald[1150]: Runtime Journal (/run/log/journal/431c1b262a2a475fb00b71ef2362cc37) is 8M, max 148.6M, 140.6M free.
Oct 30 00:08:32.143622 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 00:08:31.315502 systemd[1]: Queued start job for default target multi-user.target.
Oct 30 00:08:31.341873 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 30 00:08:31.342714 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 30 00:08:32.152648 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 30 00:08:32.155909 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 30 00:08:32.198987 kernel: loop0: detected capacity change from 0 to 229808
Oct 30 00:08:32.186801 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 30 00:08:32.198767 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 30 00:08:32.240647 systemd-journald[1150]: Time spent on flushing to /var/log/journal/431c1b262a2a475fb00b71ef2362cc37 is 163.958ms for 962 entries.
Oct 30 00:08:32.240647 systemd-journald[1150]: System Journal (/var/log/journal/431c1b262a2a475fb00b71ef2362cc37) is 8M, max 584.8M, 576.8M free.
Oct 30 00:08:32.469230 systemd-journald[1150]: Received client request to flush runtime journal.
Oct 30 00:08:32.471540 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 30 00:08:32.471606 kernel: loop1: detected capacity change from 0 to 110984
Oct 30 00:08:32.244176 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 30 00:08:32.252634 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 30 00:08:32.290393 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:08:32.413257 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 30 00:08:32.423783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 00:08:32.428318 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 30 00:08:32.432685 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:08:32.437174 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 30 00:08:32.475507 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 30 00:08:32.491582 kernel: loop2: detected capacity change from 0 to 128016
Oct 30 00:08:32.521394 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Oct 30 00:08:32.522687 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Oct 30 00:08:32.536496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:08:32.566694 kernel: loop3: detected capacity change from 0 to 50736
Oct 30 00:08:32.667381 kernel: loop4: detected capacity change from 0 to 229808
Oct 30 00:08:32.702372 kernel: loop5: detected capacity change from 0 to 110984
Oct 30 00:08:32.741670 kernel: loop6: detected capacity change from 0 to 128016
Oct 30 00:08:32.789779 kernel: loop7: detected capacity change from 0 to 50736
Oct 30 00:08:32.815860 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Oct 30 00:08:32.821421 (sd-merge)[1223]: Merged extensions into '/usr'.
Oct 30 00:08:32.834091 systemd[1]: Reload requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 30 00:08:32.834119 systemd[1]: Reloading...
Oct 30 00:08:33.057638 zram_generator::config[1249]: No configuration found.
Oct 30 00:08:33.460365 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 30 00:08:33.628549 systemd[1]: Reloading finished in 793 ms.
Oct 30 00:08:33.647430 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 30 00:08:33.651111 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 30 00:08:33.665562 systemd[1]: Starting ensure-sysext.service...
Oct 30 00:08:33.670560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 00:08:33.721355 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 30 00:08:33.721407 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 30 00:08:33.721846 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 30 00:08:33.722318 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 30 00:08:33.724799 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 30 00:08:33.725494 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Oct 30 00:08:33.725536 systemd[1]: Reload requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)...
Oct 30 00:08:33.725555 systemd[1]: Reloading...
Oct 30 00:08:33.725986 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Oct 30 00:08:33.745140 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 00:08:33.745171 systemd-tmpfiles[1290]: Skipping /boot
Oct 30 00:08:33.772877 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 00:08:33.772906 systemd-tmpfiles[1290]: Skipping /boot
Oct 30 00:08:33.888364 zram_generator::config[1317]: No configuration found.
Oct 30 00:08:34.159563 systemd[1]: Reloading finished in 433 ms.
Oct 30 00:08:34.181628 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 30 00:08:34.213476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 00:08:34.234915 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 00:08:34.252498 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 30 00:08:34.273338 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 30 00:08:34.296524 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 00:08:34.310445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 00:08:34.325272 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 30 00:08:34.347414 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:08:34.349146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:08:34.353468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 00:08:34.366606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 00:08:34.380481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 00:08:34.389711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:08:34.389965 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:08:34.399565 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 30 00:08:34.409433 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:08:34.413371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 00:08:34.414821 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 00:08:34.418873 systemd-udevd[1378]: Using default interface naming scheme 'v255'.
Oct 30 00:08:34.448409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 00:08:34.450205 augenrules[1388]: No rules
Oct 30 00:08:34.452629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 00:08:34.465241 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 00:08:34.465904 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 00:08:34.477116 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 30 00:08:34.489140 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 00:08:34.489846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 00:08:34.509160 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 30 00:08:34.529688 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:08:34.530499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:08:34.535683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 00:08:34.544651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 00:08:34.545563 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:08:34.545815 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 00:08:34.551801 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 30 00:08:34.560481 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:08:34.562128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 00:08:34.575202 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 30 00:08:34.587059 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 30 00:08:34.630977 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 00:08:34.631396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 00:08:34.643660 systemd[1]: Finished ensure-sysext.service. Oct 30 00:08:34.652227 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 30 00:08:34.697488 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 30 00:08:34.701688 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 00:08:34.710740 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:08:34.713727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 00:08:34.726958 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 00:08:34.741505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 00:08:34.756507 systemd[1]: Starting setup-oem.service - Setup OEM... Oct 30 00:08:34.764753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 00:08:34.764850 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:08:34.772676 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 00:08:34.781536 systemd[1]: Reached target time-set.target - System Time Set. Oct 30 00:08:34.790552 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 30 00:08:34.790607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:08:34.794496 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 30 00:08:34.826287 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 00:08:34.828361 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Oct 30 00:08:34.848054 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Oct 30 00:08:34.848984 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Oct 30 00:08:34.859911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 00:08:34.860716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 00:08:34.873554 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 00:08:34.875105 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 00:08:34.885734 systemd[1]: Finished setup-oem.service - Setup OEM. Oct 30 00:08:34.894536 augenrules[1439]: /sbin/augenrules: No change Oct 30 00:08:34.906543 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Oct 30 00:08:34.915509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 00:08:34.915653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 00:08:34.943937 augenrules[1475]: No rules Oct 30 00:08:34.949386 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 00:08:34.949788 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 00:08:34.987380 kernel: mousedev: PS/2 mouse device common for all mice Oct 30 00:08:35.019596 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. 
Oct 30 00:08:35.037353 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Oct 30 00:08:35.048356 kernel: ACPI: button: Power Button [PWRF] Oct 30 00:08:35.059361 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Oct 30 00:08:35.059506 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Oct 30 00:08:35.075358 kernel: ACPI: button: Sleep Button [SLPF] Oct 30 00:08:35.309606 systemd-networkd[1445]: lo: Link UP Oct 30 00:08:35.309628 systemd-networkd[1445]: lo: Gained carrier Oct 30 00:08:35.314896 systemd-networkd[1445]: Enumeration completed Oct 30 00:08:35.315478 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 00:08:35.316424 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 00:08:35.316445 systemd-networkd[1445]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 00:08:35.317778 systemd-networkd[1445]: eth0: Link UP Oct 30 00:08:35.318062 systemd-networkd[1445]: eth0: Gained carrier Oct 30 00:08:35.318124 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 00:08:35.337362 kernel: EDAC MC: Ver: 3.0.0 Oct 30 00:08:35.333593 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Oct 30 00:08:35.336812 systemd-networkd[1445]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8.c.flatcar-212911.internal' to 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8' Oct 30 00:08:35.336835 systemd-networkd[1445]: eth0: DHCPv4 address 10.128.0.23/32, gateway 10.128.0.1 acquired from 169.254.169.254 Oct 30 00:08:35.347674 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 30 00:08:35.398476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:08:35.425148 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Oct 30 00:08:35.446213 systemd-resolved[1372]: Positive Trust Anchors: Oct 30 00:08:35.446243 systemd-resolved[1372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 00:08:35.446312 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 00:08:35.463886 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 30 00:08:35.487977 systemd-resolved[1372]: Defaulting to hostname 'linux'. Oct 30 00:08:35.501373 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 00:08:35.514792 systemd[1]: Reached target network.target - Network. Oct 30 00:08:35.522487 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 30 00:08:35.541641 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 30 00:08:35.558118 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 30 00:08:35.691108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:08:35.701925 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 00:08:35.710804 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 30 00:08:35.722561 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 30 00:08:35.732535 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 30 00:08:35.742860 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 30 00:08:35.752735 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 30 00:08:35.763528 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 30 00:08:35.773537 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 30 00:08:35.773602 systemd[1]: Reached target paths.target - Path Units. Oct 30 00:08:35.781500 systemd[1]: Reached target timers.target - Timer Units. Oct 30 00:08:35.791578 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 30 00:08:35.802771 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 30 00:08:35.813015 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 30 00:08:35.823816 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Oct 30 00:08:35.834537 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 30 00:08:35.854487 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 30 00:08:35.864119 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 30 00:08:35.875571 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 30 00:08:35.885749 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 00:08:35.894528 systemd[1]: Reached target basic.target - Basic System. Oct 30 00:08:35.902596 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 30 00:08:35.902654 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 30 00:08:35.904376 systemd[1]: Starting containerd.service - containerd container runtime... Oct 30 00:08:35.922304 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 30 00:08:35.942901 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 30 00:08:35.958588 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 30 00:08:35.972471 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 30 00:08:35.988003 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 30 00:08:35.996527 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 30 00:08:36.003706 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 30 00:08:36.013364 jq[1527]: false Oct 30 00:08:36.017576 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 30 00:08:36.032607 systemd[1]: Started ntpd.service - Network Time Service. Oct 30 00:08:36.044538 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Oct 30 00:08:36.049813 extend-filesystems[1528]: Found /dev/sda6 Oct 30 00:08:36.081167 extend-filesystems[1528]: Found /dev/sda9 Oct 30 00:08:36.057165 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 30 00:08:36.070664 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 30 00:08:36.090775 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Refreshing passwd entry cache Oct 30 00:08:36.091219 extend-filesystems[1528]: Checking size of /dev/sda9 Oct 30 00:08:36.085790 oslogin_cache_refresh[1531]: Refreshing passwd entry cache Oct 30 00:08:36.093502 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 30 00:08:36.111916 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Oct 30 00:08:36.114923 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 30 00:08:36.119594 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Failure getting users, quitting Oct 30 00:08:36.119594 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 30 00:08:36.119594 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Refreshing group entry cache Oct 30 00:08:36.117675 systemd[1]: Starting update-engine.service - Update Engine... Oct 30 00:08:36.116628 oslogin_cache_refresh[1531]: Failure getting users, quitting Oct 30 00:08:36.116688 oslogin_cache_refresh[1531]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Oct 30 00:08:36.116789 oslogin_cache_refresh[1531]: Refreshing group entry cache Oct 30 00:08:36.123574 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Failure getting groups, quitting Oct 30 00:08:36.123574 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 00:08:36.121534 oslogin_cache_refresh[1531]: Failure getting groups, quitting Oct 30 00:08:36.121557 oslogin_cache_refresh[1531]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 00:08:36.134889 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 30 00:08:36.137601 coreos-metadata[1524]: Oct 30 00:08:36.136 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Oct 30 00:08:36.140519 coreos-metadata[1524]: Oct 30 00:08:36.140 INFO Fetch successful Oct 30 00:08:36.142276 coreos-metadata[1524]: Oct 30 00:08:36.140 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Oct 30 00:08:36.151139 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 30 00:08:36.155745 coreos-metadata[1524]: Oct 30 00:08:36.155 INFO Fetch successful Oct 30 00:08:36.155745 coreos-metadata[1524]: Oct 30 00:08:36.155 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Oct 30 00:08:36.156759 coreos-metadata[1524]: Oct 30 00:08:36.156 INFO Fetch successful Oct 30 00:08:36.156759 coreos-metadata[1524]: Oct 30 00:08:36.156 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Oct 30 00:08:36.160900 coreos-metadata[1524]: Oct 30 00:08:36.157 INFO Fetch successful Oct 30 00:08:36.163064 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Oct 30 00:08:36.163549 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 30 00:08:36.167803 update_engine[1550]: I20251030 00:08:36.165789 1550 main.cc:92] Flatcar Update Engine starting Oct 30 00:08:36.164118 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 30 00:08:36.165449 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 30 00:08:36.175120 systemd[1]: motdgen.service: Deactivated successfully. Oct 30 00:08:36.176150 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 30 00:08:36.179474 extend-filesystems[1528]: Resized partition /dev/sda9 Oct 30 00:08:36.196291 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 30 00:08:36.203629 jq[1552]: true Oct 30 00:08:36.197431 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 30 00:08:36.206159 extend-filesystems[1560]: resize2fs 1.47.3 (8-Jul-2025) Oct 30 00:08:36.249156 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Oct 30 00:08:36.291643 jq[1564]: true Oct 30 00:08:36.326488 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: ntpd 4.2.8p18@1.4062-o Wed Oct 29 21:31:58 UTC 2025 (1): Starting Oct 30 00:08:36.326488 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 30 00:08:36.326488 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: ---------------------------------------------------- Oct 30 00:08:36.326488 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: ntp-4 is maintained by Network Time Foundation, Oct 30 00:08:36.326488 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 30 00:08:36.326488 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: corporation. Support and training for ntp-4 are Oct 30 00:08:36.326488 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: available at https://www.nwtime.org/support Oct 30 00:08:36.326488 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: ---------------------------------------------------- 
Oct 30 00:08:36.325611 ntpd[1536]: ntpd 4.2.8p18@1.4062-o Wed Oct 29 21:31:58 UTC 2025 (1): Starting Oct 30 00:08:36.329761 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 30 00:08:36.325899 ntpd[1536]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 30 00:08:36.325925 ntpd[1536]: ---------------------------------------------------- Oct 30 00:08:36.325941 ntpd[1536]: ntp-4 is maintained by Network Time Foundation, Oct 30 00:08:36.325956 ntpd[1536]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 30 00:08:36.325969 ntpd[1536]: corporation. Support and training for ntp-4 are Oct 30 00:08:36.325982 ntpd[1536]: available at https://www.nwtime.org/support Oct 30 00:08:36.325999 ntpd[1536]: ---------------------------------------------------- Oct 30 00:08:36.339225 ntpd[1536]: proto: precision = 0.106 usec (-23) Oct 30 00:08:36.343550 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: proto: precision = 0.106 usec (-23) Oct 30 00:08:36.343707 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Oct 30 00:08:36.347402 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: basedate set to 2025-10-17 Oct 30 00:08:36.347402 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: gps base set to 2025-10-19 (week 2389) Oct 30 00:08:36.345301 (ntainerd)[1578]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 30 00:08:36.344864 ntpd[1536]: basedate set to 2025-10-17 Oct 30 00:08:36.352512 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: Listen and drop on 0 v6wildcard [::]:123 Oct 30 00:08:36.352512 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 30 00:08:36.352512 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: Listen normally on 2 lo 127.0.0.1:123 Oct 30 00:08:36.352512 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: Listen normally on 3 eth0 10.128.0.23:123 Oct 30 00:08:36.344892 ntpd[1536]: gps base set to 2025-10-19 (week 2389) Oct 30 00:08:36.348751 ntpd[1536]: Listen and drop on 0 v6wildcard [::]:123 Oct 30 00:08:36.348816 ntpd[1536]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 30 00:08:36.351894 ntpd[1536]: Listen normally on 2 lo 127.0.0.1:123 Oct 30 00:08:36.352298 ntpd[1536]: Listen normally on 3 eth0 10.128.0.23:123 Oct 30 00:08:36.389113 kernel: ntpd[1536]: segfault at 24 ip 0000560809ed3aeb sp 00007ffe169c20b0 error 4 in ntpd[68aeb,560809e71000+80000] likely on CPU 0 (core 0, socket 0) Oct 30 00:08:36.389218 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Oct 30 00:08:36.353445 ntpd[1536]: Listen normally on 4 lo [::1]:123 Oct 30 00:08:36.395453 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: Listen normally on 4 lo [::1]:123 Oct 30 00:08:36.395453 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: bind(21) AF_INET6 [fe80::4001:aff:fe80:17%2]:123 flags 0x811 failed: Cannot assign requested address Oct 30 00:08:36.395453 ntpd[1536]: 30 Oct 00:08:36 ntpd[1536]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:17%2]:123 Oct 30 00:08:36.353506 ntpd[1536]: bind(21) AF_INET6 [fe80::4001:aff:fe80:17%2]:123 flags 0x811 failed: Cannot assign requested address Oct 30 00:08:36.353541 ntpd[1536]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:17%2]:123 
Oct 30 00:08:36.459440 tar[1563]: linux-amd64/LICENSE Oct 30 00:08:36.461476 tar[1563]: linux-amd64/helm Oct 30 00:08:36.481719 systemd-coredump[1594]: Process 1536 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Oct 30 00:08:36.491472 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Oct 30 00:08:36.516149 systemd[1]: Started systemd-coredump@0-1594-0.service - Process Core Dump (PID 1594/UID 0). Oct 30 00:08:36.534760 systemd-logind[1548]: Watching system buttons on /dev/input/event2 (Power Button) Oct 30 00:08:36.534827 systemd-logind[1548]: Watching system buttons on /dev/input/event3 (Sleep Button) Oct 30 00:08:36.534863 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 30 00:08:36.536628 systemd-logind[1548]: New seat seat0. Oct 30 00:08:36.538171 systemd[1]: Started systemd-logind.service - User Login Management. Oct 30 00:08:36.604990 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Oct 30 00:08:36.657653 extend-filesystems[1560]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 30 00:08:36.657653 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 2 Oct 30 00:08:36.657653 extend-filesystems[1560]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Oct 30 00:08:36.678094 extend-filesystems[1528]: Resized filesystem in /dev/sda9 Oct 30 00:08:36.658828 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Oct 30 00:08:36.678291 bash[1598]: Updated "/home/core/.ssh/authorized_keys" Oct 30 00:08:36.659235 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 30 00:08:36.703498 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 30 00:08:36.752117 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 30 00:08:36.771308 systemd[1]: Starting sshkeys.service... Oct 30 00:08:36.789040 dbus-daemon[1525]: [system] SELinux support is enabled Oct 30 00:08:36.789469 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 30 00:08:36.819302 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 30 00:08:36.820417 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 30 00:08:36.826991 dbus-daemon[1525]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1445 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 30 00:08:36.831654 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 30 00:08:36.833855 update_engine[1550]: I20251030 00:08:36.832268 1550 update_check_scheduler.cc:74] Next update check in 11m35s Oct 30 00:08:36.831813 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Oct 30 00:08:36.846681 systemd-coredump[1599]: Process 1536 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1536: #0 0x0000560809ed3aeb n/a (ntpd + 0x68aeb) #1 0x0000560809e7ccdf n/a (ntpd + 0x11cdf) #2 0x0000560809e7d575 n/a (ntpd + 0x12575) #3 0x0000560809e78d8a n/a (ntpd + 0xdd8a) #4 0x0000560809e7a5d3 n/a (ntpd + 0xf5d3) #5 0x0000560809e82fd1 n/a (ntpd + 0x17fd1) #6 0x0000560809e73c2d n/a (ntpd + 0x8c2d) #7 0x00007f42bc27b16c n/a (libc.so.6 + 0x2716c) #8 0x00007f42bc27b229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000560809e73c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Oct 30 00:08:36.850396 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Oct 30 00:08:36.869181 dbus-daemon[1525]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 30 00:08:36.850638 systemd[1]: ntpd.service: Failed with result 'core-dump'. Oct 30 00:08:36.871186 systemd[1]: systemd-coredump@0-1594-0.service: Deactivated successfully. Oct 30 00:08:36.940910 systemd[1]: Started update-engine.service - Update Engine. Oct 30 00:08:36.961858 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Oct 30 00:08:36.969555 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 30 00:08:36.989607 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 30 00:08:37.004641 systemd[1]: Started ntpd.service - Network Time Service. Oct 30 00:08:37.018719 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Oct 30 00:08:37.053745 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Oct 30 00:08:37.192403 coreos-metadata[1620]: Oct 30 00:08:37.191 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Oct 30 00:08:37.202996 coreos-metadata[1620]: Oct 30 00:08:37.202 INFO Fetch failed with 404: resource not found Oct 30 00:08:37.202996 coreos-metadata[1620]: Oct 30 00:08:37.202 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Oct 30 00:08:37.207096 coreos-metadata[1620]: Oct 30 00:08:37.203 INFO Fetch successful Oct 30 00:08:37.207096 coreos-metadata[1620]: Oct 30 00:08:37.203 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Oct 30 00:08:37.207096 coreos-metadata[1620]: Oct 30 00:08:37.203 INFO Fetch failed with 404: resource not found Oct 30 00:08:37.207096 coreos-metadata[1620]: Oct 30 00:08:37.203 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Oct 30 00:08:37.207951 coreos-metadata[1620]: Oct 30 00:08:37.207 INFO Fetch failed with 404: resource not found Oct 30 00:08:37.207951 coreos-metadata[1620]: Oct 30 00:08:37.207 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Oct 30 00:08:37.211702 coreos-metadata[1620]: Oct 30 00:08:37.208 INFO Fetch successful Oct 30 00:08:37.215675 unknown[1620]: wrote ssh authorized keys file for user: core 
Oct 30 00:08:37.228479 ntpd[1621]: ntpd 4.2.8p18@1.4062-o Wed Oct 29 21:31:58 UTC 2025 (1): Starting Oct 30 00:08:37.230405 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: ntpd 4.2.8p18@1.4062-o Wed Oct 29 21:31:58 UTC 2025 (1): Starting Oct 30 00:08:37.230405 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 30 00:08:37.230405 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: ---------------------------------------------------- Oct 30 00:08:37.230405 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: ntp-4 is maintained by Network Time Foundation, Oct 30 00:08:37.230405 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 30 00:08:37.230405 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: corporation. Support and training for ntp-4 are Oct 30 00:08:37.230405 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: available at https://www.nwtime.org/support Oct 30 00:08:37.230405 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: ---------------------------------------------------- Oct 30 00:08:37.228610 ntpd[1621]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 30 00:08:37.228633 ntpd[1621]: ---------------------------------------------------- Oct 30 00:08:37.228650 ntpd[1621]: ntp-4 is maintained by Network Time Foundation, Oct 30 00:08:37.228670 ntpd[1621]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 30 00:08:37.228688 ntpd[1621]: corporation. Support and training for ntp-4 are Oct 30 00:08:37.228705 ntpd[1621]: available at https://www.nwtime.org/support Oct 30 00:08:37.228726 ntpd[1621]: ---------------------------------------------------- Oct 30 00:08:37.243626 ntpd[1621]: proto: precision = 0.192 usec (-22) Oct 30 00:08:37.246123 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: proto: precision = 0.192 usec (-22) Oct 30 00:08:37.246123 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: basedate set to 2025-10-17 Oct 30 00:08:37.246123 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: gps base set to 2025-10-19 (week 2389) Oct 30 00:08:37.246123 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: Listen and drop on 0 v6wildcard [::]:123 Oct 30 00:08:37.246123 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 30 00:08:37.244005 ntpd[1621]: basedate set to 2025-10-17 Oct 30 00:08:37.244030 ntpd[1621]: gps base set to 2025-10-19 (week 2389) Oct 30 00:08:37.244176 ntpd[1621]: Listen and drop on 0 v6wildcard [::]:123 Oct 30 00:08:37.244237 ntpd[1621]: Listen and drop on 1 v4wildcard 0.0.0.0:123 
Oct 30 00:08:37.293227 kernel: ntpd[1621]: segfault at 24 ip 0000555977376aeb sp 00007ffd79b39bf0 error 4 in ntpd[68aeb,555977314000+80000] likely on CPU 0 (core 0, socket 0) Oct 30 00:08:37.293440 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Oct 30 00:08:37.259619 ntpd[1621]: Listen normally on 2 lo 127.0.0.1:123 Oct 30 00:08:37.293695 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: Listen normally on 2 lo 127.0.0.1:123 Oct 30 00:08:37.293695 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: Listen normally on 3 eth0 10.128.0.23:123 Oct 30 00:08:37.293695 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: Listen normally on 4 lo [::1]:123 Oct 30 00:08:37.293695 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: bind(21) AF_INET6 [fe80::4001:aff:fe80:17%2]:123 flags 0x811 failed: Cannot assign requested address Oct 30 00:08:37.293695 ntpd[1621]: 30 Oct 00:08:37 ntpd[1621]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:17%2]:123 Oct 30 00:08:37.259691 ntpd[1621]: Listen normally on 3 eth0 10.128.0.23:123 Oct 30 00:08:37.259749 ntpd[1621]: Listen normally on 4 lo [::1]:123 Oct 30 00:08:37.259848 ntpd[1621]: bind(21) AF_INET6 [fe80::4001:aff:fe80:17%2]:123 flags 0x811 failed: Cannot assign requested address Oct 30 00:08:37.259889 ntpd[1621]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:17%2]:123 Oct 30 00:08:37.344814 update-ssh-keys[1628]: Updated "/home/core/.ssh/authorized_keys" Oct 30 00:08:37.346110 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 30 00:08:37.348500 systemd-networkd[1445]: eth0: Gained IPv6LL Oct 30 00:08:37.350375 systemd-coredump[1629]: Process 1621 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Oct 30 00:08:37.365455 systemd[1]: Finished sshkeys.service. Oct 30 00:08:37.372971 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Oct 30 00:08:37.390973 systemd[1]: Reached target network-online.target - Network is Online. Oct 30 00:08:37.418513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:08:37.432271 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 30 00:08:37.435101 dbus-daemon[1525]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 30 00:08:37.438397 dbus-daemon[1525]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1622 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 30 00:08:37.443097 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Oct 30 00:08:37.458989 systemd[1]: Started systemd-coredump@1-1629-0.service - Process Core Dump (PID 1629/UID 0). Oct 30 00:08:37.471308 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Oct 30 00:08:37.520876 systemd[1]: Starting polkit.service - Authorization Manager... Oct 30 00:08:37.532487 containerd[1578]: time="2025-10-30T00:08:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 30 00:08:37.536183 init.sh[1635]: + '[' -e /etc/default/instance_configs.cfg.template ']' Oct 30 00:08:37.536183 init.sh[1635]: + echo -e '[InstanceSetup]\nset_host_keys = false' Oct 30 00:08:37.540108 init.sh[1635]: + /usr/bin/google_instance_setup Oct 30 00:08:37.546973 containerd[1578]: time="2025-10-30T00:08:37.543597622Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 30 00:08:37.723770 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.724842235Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.214µs" Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.724892421Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.724923832Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.725165235Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.725188979Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.725232335Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.725376829Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.725400198Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.726024343Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.726081606Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.726109336Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 00:08:37.726931 containerd[1578]: time="2025-10-30T00:08:37.726128575Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 30 00:08:37.729291 containerd[1578]: time="2025-10-30T00:08:37.726285747Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 30 00:08:37.731515 containerd[1578]: time="2025-10-30T00:08:37.731471781Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 00:08:37.734879 containerd[1578]: time="2025-10-30T00:08:37.734831720Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 00:08:37.734972 containerd[1578]: time="2025-10-30T00:08:37.734879682Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 30 00:08:37.735058 containerd[1578]: time="2025-10-30T00:08:37.734964803Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 30 00:08:37.739017 containerd[1578]: time="2025-10-30T00:08:37.738970485Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 30 00:08:37.739138 containerd[1578]: time="2025-10-30T00:08:37.739123777Z" level=info msg="metadata content store policy set" policy=shared Oct 30 00:08:37.760526 containerd[1578]: time="2025-10-30T00:08:37.756953583Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Oct 30 00:08:37.760526 containerd[1578]: time="2025-10-30T00:08:37.757086267Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 30 00:08:37.760526 containerd[1578]: time="2025-10-30T00:08:37.757399547Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 30 00:08:37.760526 containerd[1578]: time="2025-10-30T00:08:37.758496136Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 30 00:08:37.761590 containerd[1578]: time="2025-10-30T00:08:37.758911268Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 30 00:08:37.761590 containerd[1578]: time="2025-10-30T00:08:37.761042772Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 30 00:08:37.761590 containerd[1578]: time="2025-10-30T00:08:37.761182447Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 30 00:08:37.762789 containerd[1578]: time="2025-10-30T00:08:37.761315533Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 30 00:08:37.763245 containerd[1578]: time="2025-10-30T00:08:37.762839314Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 30 00:08:37.769353 containerd[1578]: time="2025-10-30T00:08:37.763360078Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 30 00:08:37.769353 containerd[1578]: time="2025-10-30T00:08:37.768244344Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 30 00:08:37.769353 containerd[1578]: time="2025-10-30T00:08:37.768304744Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Oct 30 00:08:37.778090 containerd[1578]: time="2025-10-30T00:08:37.778018171Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 30 00:08:37.778252 containerd[1578]: time="2025-10-30T00:08:37.778122841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 30 00:08:37.778252 containerd[1578]: time="2025-10-30T00:08:37.778152646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 30 00:08:37.778252 containerd[1578]: time="2025-10-30T00:08:37.778194001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 30 00:08:37.778252 containerd[1578]: time="2025-10-30T00:08:37.778213390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 30 00:08:37.778252 containerd[1578]: time="2025-10-30T00:08:37.778231516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 30 00:08:37.778540 containerd[1578]: time="2025-10-30T00:08:37.778270146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 30 00:08:37.778540 containerd[1578]: time="2025-10-30T00:08:37.778290710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 30 00:08:37.778540 containerd[1578]: time="2025-10-30T00:08:37.778312496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 30 00:08:37.778540 containerd[1578]: time="2025-10-30T00:08:37.778363827Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 30 00:08:37.778540 containerd[1578]: time="2025-10-30T00:08:37.778383835Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 30 00:08:37.781405 containerd[1578]: 
time="2025-10-30T00:08:37.779188216Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 30 00:08:37.781405 containerd[1578]: time="2025-10-30T00:08:37.779644879Z" level=info msg="Start snapshots syncer" Oct 30 00:08:37.784774 containerd[1578]: time="2025-10-30T00:08:37.779990045Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 30 00:08:37.784774 containerd[1578]: time="2025-10-30T00:08:37.783389275Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 30 00:08:37.785098 containerd[1578]: time="2025-10-30T00:08:37.783856628Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 30 00:08:37.786579 containerd[1578]: time="2025-10-30T00:08:37.786143355Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 30 00:08:37.788063 containerd[1578]: time="2025-10-30T00:08:37.787945486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 30 00:08:37.790353 containerd[1578]: time="2025-10-30T00:08:37.788381273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 30 00:08:37.790353 containerd[1578]: time="2025-10-30T00:08:37.788423351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 30 00:08:37.790353 containerd[1578]: time="2025-10-30T00:08:37.788870223Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 30 00:08:37.790353 containerd[1578]: time="2025-10-30T00:08:37.789622528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 30 00:08:37.790353 containerd[1578]: time="2025-10-30T00:08:37.790040714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 30 00:08:37.790353 containerd[1578]: time="2025-10-30T00:08:37.790073080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 30 00:08:37.790715 containerd[1578]: time="2025-10-30T00:08:37.790389113Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 30 00:08:37.792462 containerd[1578]: time="2025-10-30T00:08:37.790422260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 30 00:08:37.792571 containerd[1578]: time="2025-10-30T00:08:37.792482455Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 30 00:08:37.792655 containerd[1578]: time="2025-10-30T00:08:37.792626076Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 00:08:37.792832 containerd[1578]: time="2025-10-30T00:08:37.792802358Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 00:08:37.793587 containerd[1578]: time="2025-10-30T00:08:37.793255697Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 00:08:37.793587 containerd[1578]: time="2025-10-30T00:08:37.793295818Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 00:08:37.793587 containerd[1578]: time="2025-10-30T00:08:37.793315129Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 30 00:08:37.793587 containerd[1578]: time="2025-10-30T00:08:37.793423271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 30 00:08:37.793587 containerd[1578]: time="2025-10-30T00:08:37.793444590Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 30 00:08:37.794350 containerd[1578]: time="2025-10-30T00:08:37.794307635Z" level=info msg="runtime interface created" Oct 30 00:08:37.794455 containerd[1578]: 
time="2025-10-30T00:08:37.794362083Z" level=info msg="created NRI interface" Oct 30 00:08:37.794455 containerd[1578]: time="2025-10-30T00:08:37.794385422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 30 00:08:37.794455 containerd[1578]: time="2025-10-30T00:08:37.794414272Z" level=info msg="Connect containerd service" Oct 30 00:08:37.799264 containerd[1578]: time="2025-10-30T00:08:37.794497838Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 30 00:08:37.805989 containerd[1578]: time="2025-10-30T00:08:37.805931275Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 00:08:37.852921 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 30 00:08:37.867714 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 30 00:08:37.868237 systemd-coredump[1636]: Process 1621 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1621: #0 0x0000555977376aeb n/a (ntpd + 0x68aeb) #1 0x000055597731fcdf n/a (ntpd + 0x11cdf) #2 0x0000555977320575 n/a (ntpd + 0x12575) #3 0x000055597731bd8a n/a (ntpd + 0xdd8a) #4 0x000055597731d5d3 n/a (ntpd + 0xf5d3) #5 0x0000555977325fd1 n/a (ntpd + 0x17fd1) #6 0x0000555977316c2d n/a (ntpd + 0x8c2d) #7 0x00007ff12b2ff16c n/a (libc.so.6 + 0x2716c) #8 0x00007ff12b2ff229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000555977316c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Oct 30 00:08:37.872178 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Oct 30 00:08:37.874041 systemd[1]: ntpd.service: Failed with result 'core-dump'. Oct 30 00:08:37.893413 systemd[1]: systemd-coredump@1-1629-0.service: Deactivated successfully. Oct 30 00:08:37.950162 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 30 00:08:37.968175 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 30 00:08:37.981307 systemd[1]: Started sshd@0-10.128.0.23:22-139.178.89.65:60310.service - OpenSSH per-connection server daemon (139.178.89.65:60310). Oct 30 00:08:38.007478 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Oct 30 00:08:38.011397 systemd[1]: Started ntpd.service - Network Time Service. Oct 30 00:08:38.094722 systemd[1]: issuegen.service: Deactivated successfully. Oct 30 00:08:38.095140 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 30 00:08:38.116416 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Oct 30 00:08:38.170946 ntpd[1679]: ntpd 4.2.8p18@1.4062-o Wed Oct 29 21:31:58 UTC 2025 (1): Starting Oct 30 00:08:38.171057 ntpd[1679]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 30 00:08:38.171076 ntpd[1679]: ---------------------------------------------------- Oct 30 00:08:38.171093 ntpd[1679]: ntp-4 is maintained by Network Time Foundation, Oct 30 00:08:38.171110 ntpd[1679]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 30 00:08:38.171126 ntpd[1679]: corporation. Support and training for ntp-4 are Oct 30 00:08:38.171142 ntpd[1679]: available at https://www.nwtime.org/support Oct 30 00:08:38.171158 ntpd[1679]: ---------------------------------------------------- Oct 30 00:08:38.179966 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 30 00:08:38.184128 ntpd[1679]: proto: precision = 0.190 usec (-22) Oct 30 00:08:38.187357 ntpd[1679]: basedate set to 2025-10-17 Oct 30 00:08:38.187388 ntpd[1679]: gps base set to 2025-10-19 (week 2389) Oct 30 00:08:38.187536 ntpd[1679]: Listen and drop on 0 v6wildcard [::]:123 Oct 30 00:08:38.187581 ntpd[1679]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 30 00:08:38.187829 ntpd[1679]: Listen normally on 2 lo 127.0.0.1:123 Oct 30 00:08:38.187878 ntpd[1679]: Listen normally on 3 eth0 10.128.0.23:123 Oct 30 00:08:38.187924 ntpd[1679]: Listen normally on 4 lo [::1]:123 Oct 30 00:08:38.187964 ntpd[1679]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:17%2]:123 Oct 30 00:08:38.188004 ntpd[1679]: Listening on routing socket on fd #22 for interface updates Oct 30 00:08:38.199987 ntpd[1679]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 30 00:08:38.200029 ntpd[1679]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 30 00:08:38.206507 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 30 00:08:38.223650 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 30 00:08:38.232902 systemd[1]: Reached target getty.target - Login Prompts. Oct 30 00:08:38.296923 polkitd[1640]: Started polkitd version 126 Oct 30 00:08:38.325736 containerd[1578]: time="2025-10-30T00:08:38.325464524Z" level=info msg="Start subscribing containerd event" Oct 30 00:08:38.325736 containerd[1578]: time="2025-10-30T00:08:38.325541642Z" level=info msg="Start recovering state" Oct 30 00:08:38.326014 containerd[1578]: time="2025-10-30T00:08:38.325707449Z" level=info msg="Start event monitor" Oct 30 00:08:38.326014 containerd[1578]: time="2025-10-30T00:08:38.325775910Z" level=info msg="Start cni network conf syncer for default" Oct 30 00:08:38.326014 containerd[1578]: time="2025-10-30T00:08:38.325797606Z" level=info msg="Start streaming server" Oct 30 00:08:38.326014 containerd[1578]: time="2025-10-30T00:08:38.325815730Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 30 00:08:38.326014 containerd[1578]: time="2025-10-30T00:08:38.325826870Z" level=info msg="runtime interface starting up..." Oct 30 00:08:38.326014 containerd[1578]: time="2025-10-30T00:08:38.325838213Z" level=info msg="starting plugins..."
Oct 30 00:08:38.326014 containerd[1578]: time="2025-10-30T00:08:38.325864687Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 30 00:08:38.327785 polkitd[1640]: Loading rules from directory /etc/polkit-1/rules.d Oct 30 00:08:38.329243 containerd[1578]: time="2025-10-30T00:08:38.329184893Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 30 00:08:38.329409 containerd[1578]: time="2025-10-30T00:08:38.329298575Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 30 00:08:38.334491 polkitd[1640]: Loading rules from directory /run/polkit-1/rules.d Oct 30 00:08:38.334624 polkitd[1640]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Oct 30 00:08:38.335783 systemd[1]: Started containerd.service - containerd container runtime. Oct 30 00:08:38.338510 containerd[1578]: time="2025-10-30T00:08:38.336166158Z" level=info msg="containerd successfully booted in 0.815980s" Oct 30 00:08:38.342870 polkitd[1640]: Loading rules from directory /usr/local/share/polkit-1/rules.d Oct 30 00:08:38.342953 polkitd[1640]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Oct 30 00:08:38.343024 polkitd[1640]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 30 00:08:38.350482 polkitd[1640]: Finished loading, compiling and executing 2 rules Oct 30 00:08:38.351260 systemd[1]: Started polkit.service - Authorization Manager. Oct 30 00:08:38.355701 dbus-daemon[1525]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 30 00:08:38.358839 polkitd[1640]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 30 00:08:38.412730 systemd-hostnamed[1622]: Hostname set to (transient) Oct 30 00:08:38.415557 systemd-resolved[1372]: System hostname changed to 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8'. 
Oct 30 00:08:38.628506 sshd[1678]: Accepted publickey for core from 139.178.89.65 port 60310 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:08:38.637528 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:38.663687 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 30 00:08:38.675236 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 30 00:08:38.724291 systemd-logind[1548]: New session 1 of user core. Oct 30 00:08:38.747423 tar[1563]: linux-amd64/README.md Oct 30 00:08:38.753237 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 30 00:08:38.775300 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 30 00:08:38.798319 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 30 00:08:38.817989 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 30 00:08:38.825737 systemd-logind[1548]: New session c1 of user core. Oct 30 00:08:39.020667 instance-setup[1642]: INFO Running google_set_multiqueue. Oct 30 00:08:39.050485 instance-setup[1642]: INFO Set channels for eth0 to 2. Oct 30 00:08:39.058379 instance-setup[1642]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Oct 30 00:08:39.064280 instance-setup[1642]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Oct 30 00:08:39.066672 instance-setup[1642]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Oct 30 00:08:39.072088 instance-setup[1642]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Oct 30 00:08:39.073759 instance-setup[1642]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. 
Oct 30 00:08:39.079101 instance-setup[1642]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Oct 30 00:08:39.081021 instance-setup[1642]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Oct 30 00:08:39.081894 instance-setup[1642]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Oct 30 00:08:39.098089 instance-setup[1642]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Oct 30 00:08:39.104588 instance-setup[1642]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Oct 30 00:08:39.106495 instance-setup[1642]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Oct 30 00:08:39.106552 instance-setup[1642]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Oct 30 00:08:39.140216 init.sh[1635]: + /usr/bin/google_metadata_script_runner --script-type startup Oct 30 00:08:39.279835 systemd[1711]: Queued start job for default target default.target. Oct 30 00:08:39.287087 systemd[1711]: Created slice app.slice - User Application Slice. Oct 30 00:08:39.287645 systemd[1711]: Reached target paths.target - Paths. Oct 30 00:08:39.287906 systemd[1711]: Reached target timers.target - Timers. Oct 30 00:08:39.293498 systemd[1711]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 30 00:08:39.327875 systemd[1711]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 30 00:08:39.328112 systemd[1711]: Reached target sockets.target - Sockets. Oct 30 00:08:39.328205 systemd[1711]: Reached target basic.target - Basic System. Oct 30 00:08:39.328303 systemd[1711]: Reached target default.target - Main User Target. Oct 30 00:08:39.329311 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 30 00:08:39.330960 systemd[1711]: Startup finished in 484ms. Oct 30 00:08:39.348645 systemd[1]: Started session-1.scope - Session 1 of User core. 
Oct 30 00:08:39.442819 startup-script[1746]: INFO Starting startup scripts. Oct 30 00:08:39.451454 startup-script[1746]: INFO No startup scripts found in metadata. Oct 30 00:08:39.451562 startup-script[1746]: INFO Finished running startup scripts. Oct 30 00:08:39.480722 init.sh[1635]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Oct 30 00:08:39.480976 init.sh[1635]: + daemon_pids=() Oct 30 00:08:39.481142 init.sh[1635]: + for d in accounts clock_skew network Oct 30 00:08:39.481578 init.sh[1635]: + daemon_pids+=($!) Oct 30 00:08:39.481988 init.sh[1635]: + for d in accounts clock_skew network Oct 30 00:08:39.482358 init.sh[1635]: + daemon_pids+=($!) Oct 30 00:08:39.482635 init.sh[1752]: + /usr/bin/google_accounts_daemon Oct 30 00:08:39.483364 init.sh[1753]: + /usr/bin/google_clock_skew_daemon Oct 30 00:08:39.483692 init.sh[1635]: + for d in accounts clock_skew network Oct 30 00:08:39.483934 init.sh[1635]: + daemon_pids+=($!) Oct 30 00:08:39.484476 init.sh[1635]: + NOTIFY_SOCKET=/run/systemd/notify Oct 30 00:08:39.485442 init.sh[1754]: + /usr/bin/google_network_daemon Oct 30 00:08:39.486377 init.sh[1635]: + /usr/bin/systemd-notify --ready Oct 30 00:08:39.511985 systemd[1]: Started oem-gce.service - GCE Linux Agent. Oct 30 00:08:39.527353 init.sh[1635]: + wait -n 1752 1753 1754 Oct 30 00:08:39.619894 systemd[1]: Started sshd@1-10.128.0.23:22-139.178.89.65:60322.service - OpenSSH per-connection server daemon (139.178.89.65:60322). Oct 30 00:08:40.043019 sshd[1758]: Accepted publickey for core from 139.178.89.65 port 60322 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:08:40.048524 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:40.071436 systemd-logind[1548]: New session 2 of user core. Oct 30 00:08:40.075637 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 30 00:08:40.164493 google-clock-skew[1753]: INFO Starting Google Clock Skew daemon. 
Oct 30 00:08:40.174063 google-networking[1754]: INFO Starting Google Networking daemon. Oct 30 00:08:40.179640 google-clock-skew[1753]: INFO Clock drift token has changed: 0. Oct 30 00:08:40.246866 groupadd[1770]: group added to /etc/group: name=google-sudoers, GID=1000 Oct 30 00:08:40.251218 groupadd[1770]: group added to /etc/gshadow: name=google-sudoers Oct 30 00:08:40.001137 systemd-resolved[1372]: Clock change detected. Flushing caches. Oct 30 00:08:40.023783 systemd-journald[1150]: Time jumped backwards, rotating. Oct 30 00:08:40.023952 sshd[1767]: Connection closed by 139.178.89.65 port 60322 Oct 30 00:08:40.010170 google-clock-skew[1753]: INFO Synced system time with hardware clock. Oct 30 00:08:40.015089 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:40.028251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:08:40.041500 systemd[1]: sshd@1-10.128.0.23:22-139.178.89.65:60322.service: Deactivated successfully. Oct 30 00:08:40.048167 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:08:40.048548 systemd[1]: session-2.scope: Deactivated successfully. Oct 30 00:08:40.055556 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit. Oct 30 00:08:40.066113 groupadd[1770]: new group: name=google-sudoers, GID=1000 Oct 30 00:08:40.076120 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 30 00:08:40.087470 systemd[1]: Started sshd@2-10.128.0.23:22-139.178.89.65:60326.service - OpenSSH per-connection server daemon (139.178.89.65:60326). Oct 30 00:08:40.100057 systemd[1]: Startup finished in 4.179s (kernel) + 8.478s (initrd) + 10.147s (userspace) = 22.805s. Oct 30 00:08:40.104516 systemd-logind[1548]: Removed session 2. Oct 30 00:08:40.137649 google-accounts[1752]: INFO Starting Google Accounts daemon. 
Oct 30 00:08:40.180310 google-accounts[1752]: WARNING OS Login not installed. Oct 30 00:08:40.184100 google-accounts[1752]: INFO Creating a new user account for 0. Oct 30 00:08:40.190732 init.sh[1798]: useradd: invalid user name '0': use --badname to ignore Oct 30 00:08:40.191500 google-accounts[1752]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Oct 30 00:08:40.448918 sshd[1790]: Accepted publickey for core from 139.178.89.65 port 60326 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:08:40.452283 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:40.462645 systemd-logind[1548]: New session 3 of user core. Oct 30 00:08:40.469358 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 30 00:08:40.667796 sshd[1804]: Connection closed by 139.178.89.65 port 60326 Oct 30 00:08:40.669557 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:40.678086 systemd[1]: sshd@2-10.128.0.23:22-139.178.89.65:60326.service: Deactivated successfully. Oct 30 00:08:40.681548 systemd[1]: session-3.scope: Deactivated successfully. Oct 30 00:08:40.683684 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Oct 30 00:08:40.686698 systemd-logind[1548]: Removed session 3. Oct 30 00:08:41.075561 kubelet[1780]: E1030 00:08:41.075468 1780 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:08:41.079447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:08:41.079753 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 30 00:08:41.080503 systemd[1]: kubelet.service: Consumed 1.388s CPU time, 267.7M memory peak. Oct 30 00:08:50.726605 systemd[1]: Started sshd@3-10.128.0.23:22-139.178.89.65:45998.service - OpenSSH per-connection server daemon (139.178.89.65:45998). Oct 30 00:08:51.032871 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 45998 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:08:51.035240 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:51.044098 systemd-logind[1548]: New session 4 of user core. Oct 30 00:08:51.051301 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 30 00:08:51.206989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 30 00:08:51.209439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:08:51.248041 sshd[1816]: Connection closed by 139.178.89.65 port 45998 Oct 30 00:08:51.248926 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:51.255568 systemd[1]: sshd@3-10.128.0.23:22-139.178.89.65:45998.service: Deactivated successfully. Oct 30 00:08:51.259568 systemd[1]: session-4.scope: Deactivated successfully. Oct 30 00:08:51.261903 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Oct 30 00:08:51.265514 systemd-logind[1548]: Removed session 4. Oct 30 00:08:51.308225 systemd[1]: Started sshd@4-10.128.0.23:22-139.178.89.65:46004.service - OpenSSH per-connection server daemon (139.178.89.65:46004). Oct 30 00:08:51.588787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 00:08:51.603763 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:08:51.631128 sshd[1825]: Accepted publickey for core from 139.178.89.65 port 46004 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:08:51.634128 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:51.643420 systemd-logind[1548]: New session 5 of user core. Oct 30 00:08:51.652306 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 30 00:08:51.677958 kubelet[1833]: E1030 00:08:51.677866 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:08:51.683892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:08:51.684176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:08:51.684692 systemd[1]: kubelet.service: Consumed 247ms CPU time, 110.8M memory peak. Oct 30 00:08:51.842840 sshd[1839]: Connection closed by 139.178.89.65 port 46004 Oct 30 00:08:51.844364 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:51.850096 systemd[1]: sshd@4-10.128.0.23:22-139.178.89.65:46004.service: Deactivated successfully. Oct 30 00:08:51.852741 systemd[1]: session-5.scope: Deactivated successfully. Oct 30 00:08:51.855880 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Oct 30 00:08:51.857746 systemd-logind[1548]: Removed session 5. Oct 30 00:08:51.896897 systemd[1]: Started sshd@5-10.128.0.23:22-139.178.89.65:46008.service - OpenSSH per-connection server daemon (139.178.89.65:46008). 
Oct 30 00:08:52.200319 sshd[1846]: Accepted publickey for core from 139.178.89.65 port 46008 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:08:52.202116 sshd-session[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:52.208446 systemd-logind[1548]: New session 6 of user core. Oct 30 00:08:52.217321 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 30 00:08:52.414237 sshd[1849]: Connection closed by 139.178.89.65 port 46008 Oct 30 00:08:52.415182 sshd-session[1846]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:52.421530 systemd[1]: sshd@5-10.128.0.23:22-139.178.89.65:46008.service: Deactivated successfully. Oct 30 00:08:52.424196 systemd[1]: session-6.scope: Deactivated successfully. Oct 30 00:08:52.426110 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Oct 30 00:08:52.427957 systemd-logind[1548]: Removed session 6. Oct 30 00:08:52.469315 systemd[1]: Started sshd@6-10.128.0.23:22-139.178.89.65:46018.service - OpenSSH per-connection server daemon (139.178.89.65:46018). Oct 30 00:08:52.777764 sshd[1855]: Accepted publickey for core from 139.178.89.65 port 46018 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:08:52.779529 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:52.787854 systemd-logind[1548]: New session 7 of user core. Oct 30 00:08:52.798542 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 30 00:08:52.974565 sudo[1859]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 30 00:08:52.975185 sudo[1859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:08:52.991080 sudo[1859]: pam_unix(sudo:session): session closed for user root Oct 30 00:08:53.034896 sshd[1858]: Connection closed by 139.178.89.65 port 46018 Oct 30 00:08:53.036326 sshd-session[1855]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:53.042648 systemd[1]: sshd@6-10.128.0.23:22-139.178.89.65:46018.service: Deactivated successfully. Oct 30 00:08:53.045496 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 00:08:53.048635 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit. Oct 30 00:08:53.050817 systemd-logind[1548]: Removed session 7. Oct 30 00:08:53.091420 systemd[1]: Started sshd@7-10.128.0.23:22-139.178.89.65:46032.service - OpenSSH per-connection server daemon (139.178.89.65:46032). Oct 30 00:08:53.416637 sshd[1865]: Accepted publickey for core from 139.178.89.65 port 46032 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:08:53.418588 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:53.426792 systemd-logind[1548]: New session 8 of user core. Oct 30 00:08:53.432277 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 30 00:08:53.602614 sudo[1870]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 30 00:08:53.603206 sudo[1870]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:08:53.610720 sudo[1870]: pam_unix(sudo:session): session closed for user root Oct 30 00:08:53.625777 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 30 00:08:53.626366 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:08:53.640980 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 00:08:53.699378 augenrules[1892]: No rules Oct 30 00:08:53.701795 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 00:08:53.702230 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 00:08:53.704407 sudo[1869]: pam_unix(sudo:session): session closed for user root Oct 30 00:08:53.749454 sshd[1868]: Connection closed by 139.178.89.65 port 46032 Oct 30 00:08:53.750384 sshd-session[1865]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:53.757436 systemd[1]: sshd@7-10.128.0.23:22-139.178.89.65:46032.service: Deactivated successfully. Oct 30 00:08:53.760208 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 00:08:53.761919 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Oct 30 00:08:53.764059 systemd-logind[1548]: Removed session 8. Oct 30 00:08:53.807631 systemd[1]: Started sshd@8-10.128.0.23:22-139.178.89.65:46044.service - OpenSSH per-connection server daemon (139.178.89.65:46044). 
Oct 30 00:08:54.120292 sshd[1901]: Accepted publickey for core from 139.178.89.65 port 46044 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:08:54.122168 sshd-session[1901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:54.130103 systemd-logind[1548]: New session 9 of user core. Oct 30 00:08:54.137302 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 00:08:54.301681 sudo[1905]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 30 00:08:54.302417 sudo[1905]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:08:54.807802 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 30 00:08:54.824797 (dockerd)[1923]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 30 00:08:55.192354 dockerd[1923]: time="2025-10-30T00:08:55.191742151Z" level=info msg="Starting up" Oct 30 00:08:55.193223 dockerd[1923]: time="2025-10-30T00:08:55.193180841Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 30 00:08:55.211048 dockerd[1923]: time="2025-10-30T00:08:55.210929941Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 30 00:08:55.239778 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1889890483-merged.mount: Deactivated successfully. Oct 30 00:08:55.277877 dockerd[1923]: time="2025-10-30T00:08:55.277515625Z" level=info msg="Loading containers: start." Oct 30 00:08:55.298048 kernel: Initializing XFRM netlink socket Oct 30 00:08:55.690073 systemd-networkd[1445]: docker0: Link UP Oct 30 00:08:55.696757 dockerd[1923]: time="2025-10-30T00:08:55.696683084Z" level=info msg="Loading containers: done." 
Oct 30 00:08:55.717785 dockerd[1923]: time="2025-10-30T00:08:55.717724296Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 30 00:08:55.718032 dockerd[1923]: time="2025-10-30T00:08:55.717836442Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 30 00:08:55.718032 dockerd[1923]: time="2025-10-30T00:08:55.717978864Z" level=info msg="Initializing buildkit" Oct 30 00:08:55.756990 dockerd[1923]: time="2025-10-30T00:08:55.756916885Z" level=info msg="Completed buildkit initialization" Oct 30 00:08:55.761992 dockerd[1923]: time="2025-10-30T00:08:55.761908745Z" level=info msg="Daemon has completed initialization" Oct 30 00:08:55.762371 dockerd[1923]: time="2025-10-30T00:08:55.762116677Z" level=info msg="API listen on /run/docker.sock" Oct 30 00:08:55.762281 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 30 00:08:56.718146 containerd[1578]: time="2025-10-30T00:08:56.718073085Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 30 00:08:57.303901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051412501.mount: Deactivated successfully. 
Oct 30 00:08:59.342069 containerd[1578]: time="2025-10-30T00:08:59.341971696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:59.343972 containerd[1578]: time="2025-10-30T00:08:59.343582150Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114899" Oct 30 00:08:59.345498 containerd[1578]: time="2025-10-30T00:08:59.345451113Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:59.349603 containerd[1578]: time="2025-10-30T00:08:59.349554667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:59.354123 containerd[1578]: time="2025-10-30T00:08:59.354069287Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.635930287s" Oct 30 00:08:59.354335 containerd[1578]: time="2025-10-30T00:08:59.354300442Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 30 00:08:59.357634 containerd[1578]: time="2025-10-30T00:08:59.356913260Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 30 00:09:01.357469 containerd[1578]: time="2025-10-30T00:09:01.357388391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:01.359363 containerd[1578]: time="2025-10-30T00:09:01.359008004Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020850" Oct 30 00:09:01.360948 containerd[1578]: time="2025-10-30T00:09:01.360898753Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:01.364827 containerd[1578]: time="2025-10-30T00:09:01.364771120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:01.366341 containerd[1578]: time="2025-10-30T00:09:01.366294958Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.009333515s" Oct 30 00:09:01.366505 containerd[1578]: time="2025-10-30T00:09:01.366480583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 30 00:09:01.367581 containerd[1578]: time="2025-10-30T00:09:01.367494903Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 30 00:09:01.715169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 30 00:09:01.719780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:09:02.078883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 00:09:02.092818 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:09:02.159451 kubelet[2204]: E1030 00:09:02.159365 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:09:02.163520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:09:02.163825 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:09:02.164466 systemd[1]: kubelet.service: Consumed 235ms CPU time, 108.3M memory peak. Oct 30 00:09:03.079847 containerd[1578]: time="2025-10-30T00:09:03.079756280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:03.081628 containerd[1578]: time="2025-10-30T00:09:03.081343051Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155574" Oct 30 00:09:03.083168 containerd[1578]: time="2025-10-30T00:09:03.083114967Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:03.087364 containerd[1578]: time="2025-10-30T00:09:03.087305880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:03.088866 containerd[1578]: time="2025-10-30T00:09:03.088818088Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id 
\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.721269168s" Oct 30 00:09:03.089071 containerd[1578]: time="2025-10-30T00:09:03.089044930Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 30 00:09:03.090035 containerd[1578]: time="2025-10-30T00:09:03.089861131Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 30 00:09:04.463856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1367626041.mount: Deactivated successfully. Oct 30 00:09:05.284523 containerd[1578]: time="2025-10-30T00:09:05.284444071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:05.286060 containerd[1578]: time="2025-10-30T00:09:05.285829577Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929475" Oct 30 00:09:05.287608 containerd[1578]: time="2025-10-30T00:09:05.287530361Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:05.291986 containerd[1578]: time="2025-10-30T00:09:05.291896886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:05.293038 containerd[1578]: time="2025-10-30T00:09:05.292914039Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.202615836s" Oct 30 00:09:05.293038 containerd[1578]: time="2025-10-30T00:09:05.292974537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 30 00:09:05.294079 containerd[1578]: time="2025-10-30T00:09:05.293607661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 30 00:09:06.221849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177910475.mount: Deactivated successfully. Oct 30 00:09:07.566223 containerd[1578]: time="2025-10-30T00:09:07.566150504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:07.567912 containerd[1578]: time="2025-10-30T00:09:07.567788992Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244" Oct 30 00:09:07.569394 containerd[1578]: time="2025-10-30T00:09:07.569351394Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:07.573050 containerd[1578]: time="2025-10-30T00:09:07.572910070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:07.575042 containerd[1578]: time="2025-10-30T00:09:07.574406120Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.280741314s" Oct 30 00:09:07.575042 containerd[1578]: time="2025-10-30T00:09:07.574450868Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 30 00:09:07.575633 containerd[1578]: time="2025-10-30T00:09:07.575582187Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 30 00:09:08.070144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3259391134.mount: Deactivated successfully. Oct 30 00:09:08.078222 containerd[1578]: time="2025-10-30T00:09:08.078135602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:09:08.079406 containerd[1578]: time="2025-10-30T00:09:08.079299856Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Oct 30 00:09:08.082039 containerd[1578]: time="2025-10-30T00:09:08.080980046Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:09:08.084465 containerd[1578]: time="2025-10-30T00:09:08.084418511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:09:08.085633 containerd[1578]: time="2025-10-30T00:09:08.085579204Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 509.956514ms" Oct 30 00:09:08.086167 containerd[1578]: time="2025-10-30T00:09:08.085636912Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 30 00:09:08.087245 containerd[1578]: time="2025-10-30T00:09:08.087168150Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 30 00:09:08.156435 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 30 00:09:08.647693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount179226442.mount: Deactivated successfully. Oct 30 00:09:11.401602 containerd[1578]: time="2025-10-30T00:09:11.401509318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:11.403680 containerd[1578]: time="2025-10-30T00:09:11.403476447Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378439" Oct 30 00:09:11.405285 containerd[1578]: time="2025-10-30T00:09:11.405233911Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:11.409270 containerd[1578]: time="2025-10-30T00:09:11.409217535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:11.411091 containerd[1578]: time="2025-10-30T00:09:11.410744626Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest 
\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.323536304s" Oct 30 00:09:11.411091 containerd[1578]: time="2025-10-30T00:09:11.410794265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 30 00:09:12.214895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 30 00:09:12.217330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:09:12.550358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:09:12.564626 (kubelet)[2367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:09:12.632579 kubelet[2367]: E1030 00:09:12.632503 2367 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:09:12.637128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:09:12.637400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:09:12.638738 systemd[1]: kubelet.service: Consumed 253ms CPU time, 107.7M memory peak. Oct 30 00:09:16.934003 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:09:16.934536 systemd[1]: kubelet.service: Consumed 253ms CPU time, 107.7M memory peak. Oct 30 00:09:16.939084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:09:16.989737 systemd[1]: Reload requested from client PID 2381 ('systemctl') (unit session-9.scope)... Oct 30 00:09:16.989767 systemd[1]: Reloading... 
Oct 30 00:09:17.239267 zram_generator::config[2431]: No configuration found. Oct 30 00:09:17.591717 systemd[1]: Reloading finished in 601 ms. Oct 30 00:09:17.675192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:09:17.689159 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:09:17.690273 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 00:09:17.690767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:09:17.690851 systemd[1]: kubelet.service: Consumed 196ms CPU time, 98.6M memory peak. Oct 30 00:09:17.693425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:09:18.644218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:09:18.662889 (kubelet)[2478]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:09:18.725673 kubelet[2478]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:09:18.725673 kubelet[2478]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:09:18.725673 kubelet[2478]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 30 00:09:18.726298 kubelet[2478]: I1030 00:09:18.725837 2478 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:09:19.363123 kubelet[2478]: I1030 00:09:19.362970 2478 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 30 00:09:19.363123 kubelet[2478]: I1030 00:09:19.363030 2478 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:09:19.363501 kubelet[2478]: I1030 00:09:19.363456 2478 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 00:09:19.411045 kubelet[2478]: E1030 00:09:19.410500 2478 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 30 00:09:19.417620 kubelet[2478]: I1030 00:09:19.417575 2478 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:09:19.425895 kubelet[2478]: I1030 00:09:19.425860 2478 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:09:19.432169 kubelet[2478]: I1030 00:09:19.432104 2478 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 00:09:19.432680 kubelet[2478]: I1030 00:09:19.432627 2478 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:09:19.432932 kubelet[2478]: I1030 00:09:19.432667 2478 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:09:19.433170 kubelet[2478]: I1030 00:09:19.432941 2478 topology_manager.go:138] "Creating topology 
manager with none policy" Oct 30 00:09:19.433170 kubelet[2478]: I1030 00:09:19.432960 2478 container_manager_linux.go:303] "Creating device plugin manager" Oct 30 00:09:19.433304 kubelet[2478]: I1030 00:09:19.433179 2478 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:09:19.437335 kubelet[2478]: I1030 00:09:19.437156 2478 kubelet.go:480] "Attempting to sync node with API server" Oct 30 00:09:19.437335 kubelet[2478]: I1030 00:09:19.437197 2478 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:09:19.437335 kubelet[2478]: I1030 00:09:19.437239 2478 kubelet.go:386] "Adding apiserver pod source" Oct 30 00:09:19.437335 kubelet[2478]: I1030 00:09:19.437265 2478 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:09:19.443425 kubelet[2478]: E1030 00:09:19.443367 2478 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8&limit=500&resourceVersion=0\": dial tcp 10.128.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:09:19.444920 kubelet[2478]: E1030 00:09:19.444683 2478 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:09:19.445246 kubelet[2478]: I1030 00:09:19.445223 2478 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:09:19.446339 kubelet[2478]: I1030 00:09:19.446241 2478 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the 
ClusterTrustBundleProjection featuregate is disabled" Oct 30 00:09:19.447693 kubelet[2478]: W1030 00:09:19.447666 2478 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 00:09:19.466736 kubelet[2478]: I1030 00:09:19.466680 2478 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 00:09:19.467125 kubelet[2478]: I1030 00:09:19.466779 2478 server.go:1289] "Started kubelet" Oct 30 00:09:19.470041 kubelet[2478]: I1030 00:09:19.468204 2478 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:09:19.470041 kubelet[2478]: I1030 00:09:19.469792 2478 server.go:317] "Adding debug handlers to kubelet server" Oct 30 00:09:19.470041 kubelet[2478]: I1030 00:09:19.469909 2478 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:09:19.475912 kubelet[2478]: I1030 00:09:19.475839 2478 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:09:19.476461 kubelet[2478]: I1030 00:09:19.476436 2478 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:09:19.481879 kubelet[2478]: I1030 00:09:19.481828 2478 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:09:19.484067 kubelet[2478]: E1030 00:09:19.481458 2478 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8.18731c446e599826 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,UID:ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,},FirstTimestamp:2025-10-30 00:09:19.466715174 +0000 UTC m=+0.796840708,LastTimestamp:2025-10-30 00:09:19.466715174 +0000 UTC m=+0.796840708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,}" Oct 30 00:09:19.485782 kubelet[2478]: I1030 00:09:19.485761 2478 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 00:09:19.486984 kubelet[2478]: E1030 00:09:19.486912 2478 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" Oct 30 00:09:19.491385 kubelet[2478]: E1030 00:09:19.490650 2478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8?timeout=10s\": dial tcp 10.128.0.23:6443: connect: connection refused" interval="200ms" Oct 30 00:09:19.492242 kubelet[2478]: I1030 00:09:19.492214 2478 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 00:09:19.492357 kubelet[2478]: I1030 00:09:19.492320 2478 reconciler.go:26] "Reconciler: start to sync state" Oct 30 00:09:19.492539 kubelet[2478]: I1030 00:09:19.492500 2478 factory.go:223] Registration of the systemd container factory successfully Oct 30 00:09:19.495350 kubelet[2478]: I1030 00:09:19.495314 2478 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:09:19.496047 kubelet[2478]: E1030 00:09:19.492942 2478 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 00:09:19.497899 kubelet[2478]: I1030 00:09:19.497874 2478 factory.go:223] Registration of the containerd container factory successfully Oct 30 00:09:19.510264 kubelet[2478]: E1030 00:09:19.510223 2478 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:09:19.541102 kubelet[2478]: I1030 00:09:19.540820 2478 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:09:19.541102 kubelet[2478]: I1030 00:09:19.540845 2478 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:09:19.541102 kubelet[2478]: I1030 00:09:19.540872 2478 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:09:19.545186 kubelet[2478]: I1030 00:09:19.542287 2478 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 30 00:09:19.548199 kubelet[2478]: I1030 00:09:19.548170 2478 policy_none.go:49] "None policy: Start" Oct 30 00:09:19.548317 kubelet[2478]: I1030 00:09:19.548225 2478 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 00:09:19.548317 kubelet[2478]: I1030 00:09:19.548246 2478 state_mem.go:35] "Initializing new in-memory state store" Oct 30 00:09:19.551210 kubelet[2478]: I1030 00:09:19.551183 2478 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 30 00:09:19.551784 kubelet[2478]: I1030 00:09:19.551295 2478 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 30 00:09:19.551784 kubelet[2478]: I1030 00:09:19.551332 2478 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 00:09:19.551784 kubelet[2478]: I1030 00:09:19.551347 2478 kubelet.go:2436] "Starting kubelet main sync loop" Oct 30 00:09:19.551784 kubelet[2478]: E1030 00:09:19.551434 2478 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:09:19.552551 kubelet[2478]: E1030 00:09:19.552509 2478 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:09:19.565491 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 00:09:19.584222 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 00:09:19.587099 kubelet[2478]: E1030 00:09:19.587060 2478 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" Oct 30 00:09:19.590113 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 30 00:09:19.607085 kubelet[2478]: E1030 00:09:19.606521 2478 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 00:09:19.607085 kubelet[2478]: I1030 00:09:19.606834 2478 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:09:19.607085 kubelet[2478]: I1030 00:09:19.606855 2478 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:09:19.610306 kubelet[2478]: E1030 00:09:19.610260 2478 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:09:19.611354 kubelet[2478]: I1030 00:09:19.611178 2478 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:09:19.611553 kubelet[2478]: E1030 00:09:19.611331 2478 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" Oct 30 00:09:19.676693 systemd[1]: Created slice kubepods-burstable-pod5bfe1ff00817170c1d85e9eca0aab1e2.slice - libcontainer container kubepods-burstable-pod5bfe1ff00817170c1d85e9eca0aab1e2.slice. 
Oct 30 00:09:19.691401 kubelet[2478]: E1030 00:09:19.691236 2478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8?timeout=10s\": dial tcp 10.128.0.23:6443: connect: connection refused" interval="400ms" Oct 30 00:09:19.693226 kubelet[2478]: I1030 00:09:19.693187 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bfe1ff00817170c1d85e9eca0aab1e2-ca-certs\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"5bfe1ff00817170c1d85e9eca0aab1e2\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.693587 kubelet[2478]: I1030 00:09:19.693535 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bfe1ff00817170c1d85e9eca0aab1e2-k8s-certs\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"5bfe1ff00817170c1d85e9eca0aab1e2\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.693805 kubelet[2478]: I1030 00:09:19.693694 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abca9545638598d99d6bbf184f3a9060-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"abca9545638598d99d6bbf184f3a9060\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.694098 kubelet[2478]: I1030 00:09:19.694066 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/abca9545638598d99d6bbf184f3a9060-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"abca9545638598d99d6bbf184f3a9060\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.694226 kubelet[2478]: I1030 00:09:19.694111 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abca9545638598d99d6bbf184f3a9060-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"abca9545638598d99d6bbf184f3a9060\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.694226 kubelet[2478]: I1030 00:09:19.694148 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd2a5c8764ea7834254705ef2405cdd1-kubeconfig\") pod \"kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"bd2a5c8764ea7834254705ef2405cdd1\") " pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.694226 kubelet[2478]: I1030 00:09:19.694180 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bfe1ff00817170c1d85e9eca0aab1e2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"5bfe1ff00817170c1d85e9eca0aab1e2\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.694487 kubelet[2478]: I1030 00:09:19.694225 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/abca9545638598d99d6bbf184f3a9060-ca-certs\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"abca9545638598d99d6bbf184f3a9060\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.694487 kubelet[2478]: I1030 00:09:19.694257 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abca9545638598d99d6bbf184f3a9060-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"abca9545638598d99d6bbf184f3a9060\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.694487 kubelet[2478]: E1030 00:09:19.693567 2478 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.705337 systemd[1]: Created slice kubepods-burstable-podabca9545638598d99d6bbf184f3a9060.slice - libcontainer container kubepods-burstable-podabca9545638598d99d6bbf184f3a9060.slice. 
Oct 30 00:09:19.711182 kubelet[2478]: E1030 00:09:19.711097 2478 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.713051 kubelet[2478]: I1030 00:09:19.712614 2478 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.713186 kubelet[2478]: E1030 00:09:19.713049 2478 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.23:6443/api/v1/nodes\": dial tcp 10.128.0.23:6443: connect: connection refused" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.719201 systemd[1]: Created slice kubepods-burstable-podbd2a5c8764ea7834254705ef2405cdd1.slice - libcontainer container kubepods-burstable-podbd2a5c8764ea7834254705ef2405cdd1.slice. Oct 30 00:09:19.723452 kubelet[2478]: E1030 00:09:19.723395 2478 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.918797 kubelet[2478]: I1030 00:09:19.918754 2478 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.919423 kubelet[2478]: E1030 00:09:19.919226 2478 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.23:6443/api/v1/nodes\": dial tcp 10.128.0.23:6443: connect: connection refused" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:19.997266 containerd[1578]: time="2025-10-30T00:09:19.997209870Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,Uid:5bfe1ff00817170c1d85e9eca0aab1e2,Namespace:kube-system,Attempt:0,}" Oct 30 00:09:20.013419 containerd[1578]: time="2025-10-30T00:09:20.013066850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,Uid:abca9545638598d99d6bbf184f3a9060,Namespace:kube-system,Attempt:0,}" Oct 30 00:09:20.029980 containerd[1578]: time="2025-10-30T00:09:20.029907601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,Uid:bd2a5c8764ea7834254705ef2405cdd1,Namespace:kube-system,Attempt:0,}" Oct 30 00:09:20.035678 containerd[1578]: time="2025-10-30T00:09:20.035559462Z" level=info msg="connecting to shim 62a7b6c8cfb27dee633612d831da8a5c86e0c7df131c152ee2c4b5de8d110841" address="unix:///run/containerd/s/0ef6668c55c148054e8a5c323a829c2f9149f1411b4576837d003842f9a1e288" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:09:20.092659 kubelet[2478]: E1030 00:09:20.092329 2478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8?timeout=10s\": dial tcp 10.128.0.23:6443: connect: connection refused" interval="800ms" Oct 30 00:09:20.112769 containerd[1578]: time="2025-10-30T00:09:20.112686957Z" level=info msg="connecting to shim e7c33b5a8f568db64a044c5c06103edcfea1cdabe95320742aa5005dcbcd0bb4" address="unix:///run/containerd/s/4b06a1a86e84bb5c4ce677520f1de557c77b7ad5f84b24dd43dc61ce7fc1dcf8" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:09:20.113569 systemd[1]: Started cri-containerd-62a7b6c8cfb27dee633612d831da8a5c86e0c7df131c152ee2c4b5de8d110841.scope - libcontainer container 62a7b6c8cfb27dee633612d831da8a5c86e0c7df131c152ee2c4b5de8d110841. 
Oct 30 00:09:20.120648 containerd[1578]: time="2025-10-30T00:09:20.120590456Z" level=info msg="connecting to shim cf080ca739649f1d9b2daa30d15400d5de636ccf0b9a866f046a7a67090a9597" address="unix:///run/containerd/s/09cf50d85662455a20a78bb7dd18d37d61c573fe038d288d5a5513817ec86cb8" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:09:20.155641 kubelet[2478]: E1030 00:09:20.154853 2478 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8.18731c446e599826 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,UID:ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,},FirstTimestamp:2025-10-30 00:09:19.466715174 +0000 UTC m=+0.796840708,LastTimestamp:2025-10-30 00:09:19.466715174 +0000 UTC m=+0.796840708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,}" Oct 30 00:09:20.184302 systemd[1]: Started cri-containerd-e7c33b5a8f568db64a044c5c06103edcfea1cdabe95320742aa5005dcbcd0bb4.scope - libcontainer container e7c33b5a8f568db64a044c5c06103edcfea1cdabe95320742aa5005dcbcd0bb4. Oct 30 00:09:20.205561 systemd[1]: Started cri-containerd-cf080ca739649f1d9b2daa30d15400d5de636ccf0b9a866f046a7a67090a9597.scope - libcontainer container cf080ca739649f1d9b2daa30d15400d5de636ccf0b9a866f046a7a67090a9597. 
Oct 30 00:09:20.321687 containerd[1578]: time="2025-10-30T00:09:20.321446094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,Uid:5bfe1ff00817170c1d85e9eca0aab1e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"62a7b6c8cfb27dee633612d831da8a5c86e0c7df131c152ee2c4b5de8d110841\"" Oct 30 00:09:20.328935 kubelet[2478]: E1030 00:09:20.328800 2478 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc48" Oct 30 00:09:20.330361 kubelet[2478]: I1030 00:09:20.330329 2478 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:20.330838 kubelet[2478]: E1030 00:09:20.330755 2478 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.23:6443/api/v1/nodes\": dial tcp 10.128.0.23:6443: connect: connection refused" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:20.341375 containerd[1578]: time="2025-10-30T00:09:20.341320339Z" level=info msg="CreateContainer within sandbox \"62a7b6c8cfb27dee633612d831da8a5c86e0c7df131c152ee2c4b5de8d110841\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 00:09:20.357293 containerd[1578]: time="2025-10-30T00:09:20.357188527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,Uid:abca9545638598d99d6bbf184f3a9060,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf080ca739649f1d9b2daa30d15400d5de636ccf0b9a866f046a7a67090a9597\"" Oct 30 00:09:20.360402 kubelet[2478]: E1030 00:09:20.360355 2478 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" 
podName="kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037" Oct 30 00:09:20.368279 containerd[1578]: time="2025-10-30T00:09:20.368104395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8,Uid:bd2a5c8764ea7834254705ef2405cdd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7c33b5a8f568db64a044c5c06103edcfea1cdabe95320742aa5005dcbcd0bb4\"" Oct 30 00:09:20.374103 containerd[1578]: time="2025-10-30T00:09:20.373281637Z" level=info msg="Container 6f02d6757a86f418915dffa925d89ee1d6f723775f3ca079abfefee5a7a5ea7b: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:09:20.375969 kubelet[2478]: E1030 00:09:20.375893 2478 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc48" Oct 30 00:09:20.378986 containerd[1578]: time="2025-10-30T00:09:20.378945481Z" level=info msg="CreateContainer within sandbox \"cf080ca739649f1d9b2daa30d15400d5de636ccf0b9a866f046a7a67090a9597\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 00:09:20.383569 containerd[1578]: time="2025-10-30T00:09:20.383529824Z" level=info msg="CreateContainer within sandbox \"e7c33b5a8f568db64a044c5c06103edcfea1cdabe95320742aa5005dcbcd0bb4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 00:09:20.387666 kubelet[2478]: E1030 00:09:20.387609 2478 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Oct 30 00:09:20.392480 containerd[1578]: time="2025-10-30T00:09:20.392415831Z" level=info msg="CreateContainer within sandbox \"62a7b6c8cfb27dee633612d831da8a5c86e0c7df131c152ee2c4b5de8d110841\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6f02d6757a86f418915dffa925d89ee1d6f723775f3ca079abfefee5a7a5ea7b\"" Oct 30 00:09:20.394515 containerd[1578]: time="2025-10-30T00:09:20.394203697Z" level=info msg="StartContainer for \"6f02d6757a86f418915dffa925d89ee1d6f723775f3ca079abfefee5a7a5ea7b\"" Oct 30 00:09:20.397246 containerd[1578]: time="2025-10-30T00:09:20.397204358Z" level=info msg="connecting to shim 6f02d6757a86f418915dffa925d89ee1d6f723775f3ca079abfefee5a7a5ea7b" address="unix:///run/containerd/s/0ef6668c55c148054e8a5c323a829c2f9149f1411b4576837d003842f9a1e288" protocol=ttrpc version=3 Oct 30 00:09:20.404214 containerd[1578]: time="2025-10-30T00:09:20.403692568Z" level=info msg="Container ce3a810bc638e2e6886f07fe02c066b6bc685d389705fc03465d4d5823d6f329: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:09:20.409423 containerd[1578]: time="2025-10-30T00:09:20.409367540Z" level=info msg="Container 7d4f6c3d48a4ce8568d775a666efb0670060deff928767b0b91d684a0e8a5658: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:09:20.429896 kubelet[2478]: E1030 00:09:20.429820 2478 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:09:20.437632 containerd[1578]: time="2025-10-30T00:09:20.437163607Z" level=info msg="CreateContainer within sandbox \"e7c33b5a8f568db64a044c5c06103edcfea1cdabe95320742aa5005dcbcd0bb4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"ce3a810bc638e2e6886f07fe02c066b6bc685d389705fc03465d4d5823d6f329\"" Oct 30 00:09:20.437234 systemd[1]: Started cri-containerd-6f02d6757a86f418915dffa925d89ee1d6f723775f3ca079abfefee5a7a5ea7b.scope - libcontainer container 6f02d6757a86f418915dffa925d89ee1d6f723775f3ca079abfefee5a7a5ea7b. Oct 30 00:09:20.438464 containerd[1578]: time="2025-10-30T00:09:20.438378006Z" level=info msg="CreateContainer within sandbox \"cf080ca739649f1d9b2daa30d15400d5de636ccf0b9a866f046a7a67090a9597\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7d4f6c3d48a4ce8568d775a666efb0670060deff928767b0b91d684a0e8a5658\"" Oct 30 00:09:20.441952 containerd[1578]: time="2025-10-30T00:09:20.439603648Z" level=info msg="StartContainer for \"7d4f6c3d48a4ce8568d775a666efb0670060deff928767b0b91d684a0e8a5658\"" Oct 30 00:09:20.441952 containerd[1578]: time="2025-10-30T00:09:20.440112151Z" level=info msg="StartContainer for \"ce3a810bc638e2e6886f07fe02c066b6bc685d389705fc03465d4d5823d6f329\"" Oct 30 00:09:20.441952 containerd[1578]: time="2025-10-30T00:09:20.441827441Z" level=info msg="connecting to shim 7d4f6c3d48a4ce8568d775a666efb0670060deff928767b0b91d684a0e8a5658" address="unix:///run/containerd/s/09cf50d85662455a20a78bb7dd18d37d61c573fe038d288d5a5513817ec86cb8" protocol=ttrpc version=3 Oct 30 00:09:20.445739 containerd[1578]: time="2025-10-30T00:09:20.445670792Z" level=info msg="connecting to shim ce3a810bc638e2e6886f07fe02c066b6bc685d389705fc03465d4d5823d6f329" address="unix:///run/containerd/s/4b06a1a86e84bb5c4ce677520f1de557c77b7ad5f84b24dd43dc61ce7fc1dcf8" protocol=ttrpc version=3 Oct 30 00:09:20.487664 systemd[1]: Started cri-containerd-ce3a810bc638e2e6886f07fe02c066b6bc685d389705fc03465d4d5823d6f329.scope - libcontainer container ce3a810bc638e2e6886f07fe02c066b6bc685d389705fc03465d4d5823d6f329. 
Oct 30 00:09:20.515275 systemd[1]: Started cri-containerd-7d4f6c3d48a4ce8568d775a666efb0670060deff928767b0b91d684a0e8a5658.scope - libcontainer container 7d4f6c3d48a4ce8568d775a666efb0670060deff928767b0b91d684a0e8a5658. Oct 30 00:09:20.550319 kubelet[2478]: E1030 00:09:20.550232 2478 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8&limit=500&resourceVersion=0\": dial tcp 10.128.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:09:20.615145 containerd[1578]: time="2025-10-30T00:09:20.614354683Z" level=info msg="StartContainer for \"6f02d6757a86f418915dffa925d89ee1d6f723775f3ca079abfefee5a7a5ea7b\" returns successfully" Oct 30 00:09:20.659483 containerd[1578]: time="2025-10-30T00:09:20.659398578Z" level=info msg="StartContainer for \"ce3a810bc638e2e6886f07fe02c066b6bc685d389705fc03465d4d5823d6f329\" returns successfully" Oct 30 00:09:20.689526 kubelet[2478]: E1030 00:09:20.688817 2478 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:09:20.702071 containerd[1578]: time="2025-10-30T00:09:20.701877706Z" level=info msg="StartContainer for \"7d4f6c3d48a4ce8568d775a666efb0670060deff928767b0b91d684a0e8a5658\" returns successfully" Oct 30 00:09:21.137164 kubelet[2478]: I1030 00:09:21.137124 2478 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:21.608032 kubelet[2478]: E1030 00:09:21.607972 2478 kubelet.go:3305] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:21.608550 kubelet[2478]: E1030 00:09:21.608509 2478 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:21.618126 kubelet[2478]: E1030 00:09:21.617487 2478 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:22.295944 update_engine[1550]: I20251030 00:09:22.295063 1550 update_attempter.cc:509] Updating boot flags... Oct 30 00:09:22.625225 kubelet[2478]: E1030 00:09:22.625083 2478 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:22.625783 kubelet[2478]: E1030 00:09:22.625739 2478 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:22.628754 kubelet[2478]: E1030 00:09:22.628467 2478 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:23.624343 kubelet[2478]: E1030 00:09:23.624294 2478 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:23.624878 kubelet[2478]: E1030 00:09:23.624832 2478 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:25.448318 kubelet[2478]: I1030 00:09:25.448262 2478 apiserver.go:52] "Watching apiserver" Oct 30 00:09:25.492529 kubelet[2478]: I1030 00:09:25.492484 2478 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 00:09:25.557609 kubelet[2478]: E1030 00:09:25.557364 2478 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:25.623248 kubelet[2478]: I1030 00:09:25.623074 2478 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:25.623248 kubelet[2478]: E1030 00:09:25.623132 2478 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\": node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" not found" Oct 30 00:09:25.692755 kubelet[2478]: I1030 00:09:25.691423 2478 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:25.758047 kubelet[2478]: E1030 00:09:25.757973 2478 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:25.758536 kubelet[2478]: I1030 00:09:25.758326 2478 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:25.765331 kubelet[2478]: E1030 00:09:25.765124 2478 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:25.765331 kubelet[2478]: I1030 00:09:25.765169 2478 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:25.770452 kubelet[2478]: E1030 00:09:25.770396 2478 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:26.007454 kubelet[2478]: I1030 00:09:26.007094 2478 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:26.010777 kubelet[2478]: E1030 00:09:26.010186 2478 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:27.654883 systemd[1]: Reload requested from client PID 2784 ('systemctl') (unit session-9.scope)... Oct 30 00:09:27.654913 systemd[1]: Reloading... 
Oct 30 00:09:27.712334 kubelet[2478]: I1030 00:09:27.711721 2478 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:27.723608 kubelet[2478]: I1030 00:09:27.723328 2478 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Oct 30 00:09:27.846064 zram_generator::config[2828]: No configuration found. Oct 30 00:09:28.211158 systemd[1]: Reloading finished in 555 ms. Oct 30 00:09:28.252038 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:09:28.271213 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 00:09:28.271638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:09:28.271755 systemd[1]: kubelet.service: Consumed 1.462s CPU time, 132.6M memory peak. Oct 30 00:09:28.275441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:09:28.636049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:09:28.650680 (kubelet)[2876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:09:28.761642 kubelet[2876]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:09:28.761642 kubelet[2876]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:09:28.761642 kubelet[2876]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:09:28.762641 kubelet[2876]: I1030 00:09:28.761745 2876 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:09:28.780970 kubelet[2876]: I1030 00:09:28.780900 2876 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 30 00:09:28.780970 kubelet[2876]: I1030 00:09:28.780943 2876 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:09:28.781963 kubelet[2876]: I1030 00:09:28.781924 2876 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 00:09:28.787154 kubelet[2876]: I1030 00:09:28.787105 2876 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 30 00:09:28.797897 kubelet[2876]: I1030 00:09:28.796283 2876 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:09:28.822071 kubelet[2876]: I1030 00:09:28.821963 2876 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:09:28.831161 kubelet[2876]: I1030 00:09:28.830570 2876 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 00:09:28.831161 kubelet[2876]: I1030 00:09:28.830913 2876 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:09:28.831423 kubelet[2876]: I1030 00:09:28.830950 2876 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:09:28.831423 kubelet[2876]: I1030 00:09:28.831259 2876 topology_manager.go:138] "Creating topology 
manager with none policy" Oct 30 00:09:28.831423 kubelet[2876]: I1030 00:09:28.831292 2876 container_manager_linux.go:303] "Creating device plugin manager" Oct 30 00:09:28.831423 kubelet[2876]: I1030 00:09:28.831365 2876 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:09:28.831742 kubelet[2876]: I1030 00:09:28.831589 2876 kubelet.go:480] "Attempting to sync node with API server" Oct 30 00:09:28.834467 kubelet[2876]: I1030 00:09:28.831611 2876 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:09:28.834467 kubelet[2876]: I1030 00:09:28.834109 2876 kubelet.go:386] "Adding apiserver pod source" Oct 30 00:09:28.836756 kubelet[2876]: I1030 00:09:28.835950 2876 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:09:28.840024 kubelet[2876]: I1030 00:09:28.839944 2876 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:09:28.840870 kubelet[2876]: I1030 00:09:28.840819 2876 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 00:09:28.899784 kubelet[2876]: I1030 00:09:28.899583 2876 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 00:09:28.899784 kubelet[2876]: I1030 00:09:28.899691 2876 server.go:1289] "Started kubelet" Oct 30 00:09:28.903529 kubelet[2876]: I1030 00:09:28.903118 2876 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:09:28.905678 kubelet[2876]: I1030 00:09:28.904558 2876 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:09:28.908293 kubelet[2876]: I1030 00:09:28.907414 2876 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:09:28.916638 kubelet[2876]: I1030 00:09:28.916528 2876 server.go:317] "Adding debug handlers to 
kubelet server" Oct 30 00:09:28.920653 kubelet[2876]: I1030 00:09:28.919354 2876 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:09:28.925642 kubelet[2876]: I1030 00:09:28.925313 2876 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 00:09:28.926593 kubelet[2876]: I1030 00:09:28.926556 2876 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:09:28.930254 kubelet[2876]: I1030 00:09:28.929182 2876 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 00:09:28.930254 kubelet[2876]: I1030 00:09:28.929392 2876 reconciler.go:26] "Reconciler: start to sync state" Oct 30 00:09:28.932397 kubelet[2876]: I1030 00:09:28.932333 2876 factory.go:223] Registration of the systemd container factory successfully Oct 30 00:09:28.932558 kubelet[2876]: I1030 00:09:28.932473 2876 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:09:28.934209 kubelet[2876]: E1030 00:09:28.933774 2876 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:09:28.952945 kubelet[2876]: I1030 00:09:28.952558 2876 factory.go:223] Registration of the containerd container factory successfully Oct 30 00:09:29.037038 kubelet[2876]: I1030 00:09:29.036964 2876 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 30 00:09:29.051917 kubelet[2876]: I1030 00:09:29.051867 2876 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 30 00:09:29.052536 kubelet[2876]: I1030 00:09:29.052244 2876 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 30 00:09:29.052926 kubelet[2876]: I1030 00:09:29.052807 2876 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 00:09:29.052926 kubelet[2876]: I1030 00:09:29.052831 2876 kubelet.go:2436] "Starting kubelet main sync loop" Oct 30 00:09:29.059243 kubelet[2876]: E1030 00:09:29.059140 2876 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:09:29.152452 kubelet[2876]: I1030 00:09:29.152236 2876 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:09:29.154285 kubelet[2876]: I1030 00:09:29.153119 2876 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:09:29.154285 kubelet[2876]: I1030 00:09:29.153162 2876 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:09:29.154285 kubelet[2876]: I1030 00:09:29.153369 2876 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 30 00:09:29.154285 kubelet[2876]: I1030 00:09:29.153384 2876 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 30 00:09:29.154285 kubelet[2876]: I1030 00:09:29.153524 2876 policy_none.go:49] "None policy: Start" Oct 30 00:09:29.154285 kubelet[2876]: I1030 00:09:29.153543 2876 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 00:09:29.154285 kubelet[2876]: I1030 00:09:29.153564 2876 state_mem.go:35] "Initializing new in-memory state store" Oct 30 00:09:29.154285 kubelet[2876]: I1030 00:09:29.153809 2876 state_mem.go:75] "Updated machine memory state" Oct 30 00:09:29.159956 kubelet[2876]: E1030 00:09:29.159925 2876 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 00:09:29.164344 kubelet[2876]: E1030 00:09:29.164247 
2876 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 00:09:29.164917 kubelet[2876]: I1030 00:09:29.164890 2876 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:09:29.165064 kubelet[2876]: I1030 00:09:29.164915 2876 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:09:29.167441 kubelet[2876]: I1030 00:09:29.167412 2876 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:09:29.170397 kubelet[2876]: E1030 00:09:29.170175 2876 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:09:29.289369 kubelet[2876]: I1030 00:09:29.289314 2876 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.307970 kubelet[2876]: I1030 00:09:29.307921 2876 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.308220 kubelet[2876]: I1030 00:09:29.308067 2876 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.363475 kubelet[2876]: I1030 00:09:29.363393 2876 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.365235 kubelet[2876]: I1030 00:09:29.363763 2876 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.367673 kubelet[2876]: I1030 00:09:29.363413 2876 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.388536 
kubelet[2876]: I1030 00:09:29.388474 2876 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Oct 30 00:09:29.394607 kubelet[2876]: I1030 00:09:29.394552 2876 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Oct 30 00:09:29.394900 kubelet[2876]: I1030 00:09:29.394840 2876 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Oct 30 00:09:29.395001 kubelet[2876]: E1030 00:09:29.394921 2876 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" already exists" pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.433490 kubelet[2876]: I1030 00:09:29.432830 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bfe1ff00817170c1d85e9eca0aab1e2-k8s-certs\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"5bfe1ff00817170c1d85e9eca0aab1e2\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.433490 kubelet[2876]: I1030 00:09:29.432917 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abca9545638598d99d6bbf184f3a9060-ca-certs\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"abca9545638598d99d6bbf184f3a9060\") " 
pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.433490 kubelet[2876]: I1030 00:09:29.432960 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abca9545638598d99d6bbf184f3a9060-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"abca9545638598d99d6bbf184f3a9060\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.433490 kubelet[2876]: I1030 00:09:29.433005 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abca9545638598d99d6bbf184f3a9060-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"abca9545638598d99d6bbf184f3a9060\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.433835 kubelet[2876]: I1030 00:09:29.433076 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abca9545638598d99d6bbf184f3a9060-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"abca9545638598d99d6bbf184f3a9060\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.433835 kubelet[2876]: I1030 00:09:29.433116 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd2a5c8764ea7834254705ef2405cdd1-kubeconfig\") pod \"kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"bd2a5c8764ea7834254705ef2405cdd1\") " 
pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.433835 kubelet[2876]: I1030 00:09:29.433154 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bfe1ff00817170c1d85e9eca0aab1e2-ca-certs\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"5bfe1ff00817170c1d85e9eca0aab1e2\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.433835 kubelet[2876]: I1030 00:09:29.433191 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bfe1ff00817170c1d85e9eca0aab1e2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"5bfe1ff00817170c1d85e9eca0aab1e2\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.434085 kubelet[2876]: I1030 00:09:29.433230 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abca9545638598d99d6bbf184f3a9060-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" (UID: \"abca9545638598d99d6bbf184f3a9060\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:29.839110 kubelet[2876]: I1030 00:09:29.837664 2876 apiserver.go:52] "Watching apiserver" Oct 30 00:09:29.930140 kubelet[2876]: I1030 00:09:29.930093 2876 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 00:09:29.971762 kubelet[2876]: I1030 00:09:29.971632 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" podStartSLOduration=2.971606813 podStartE2EDuration="2.971606813s" podCreationTimestamp="2025-10-30 00:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:09:29.956870925 +0000 UTC m=+1.298003285" watchObservedRunningTime="2025-10-30 00:09:29.971606813 +0000 UTC m=+1.312739130" Oct 30 00:09:29.989079 kubelet[2876]: I1030 00:09:29.988987 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" podStartSLOduration=0.988962676 podStartE2EDuration="988.962676ms" podCreationTimestamp="2025-10-30 00:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:09:29.972546567 +0000 UTC m=+1.313678882" watchObservedRunningTime="2025-10-30 00:09:29.988962676 +0000 UTC m=+1.330094986" Oct 30 00:09:29.989785 kubelet[2876]: I1030 00:09:29.989656 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" podStartSLOduration=0.989635714 podStartE2EDuration="989.635714ms" podCreationTimestamp="2025-10-30 00:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:09:29.988574846 +0000 UTC m=+1.329707186" watchObservedRunningTime="2025-10-30 00:09:29.989635714 +0000 UTC m=+1.330768029" Oct 30 00:09:30.112074 kubelet[2876]: I1030 00:09:30.111401 2876 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:30.128761 kubelet[2876]: I1030 00:09:30.128613 2876 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Oct 30 00:09:30.128761 kubelet[2876]: E1030 00:09:30.128703 2876 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" already exists" pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:09:33.558427 kubelet[2876]: I1030 00:09:33.557752 2876 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 00:09:33.559067 containerd[1578]: time="2025-10-30T00:09:33.558171642Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 30 00:09:33.560454 kubelet[2876]: I1030 00:09:33.560026 2876 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 00:09:34.254183 systemd[1]: Created slice kubepods-besteffort-podeeb23f64_61db_4965_8018_86c8aa5a8c80.slice - libcontainer container kubepods-besteffort-podeeb23f64_61db_4965_8018_86c8aa5a8c80.slice. 
Oct 30 00:09:34.262719 kubelet[2876]: I1030 00:09:34.262640 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeb23f64-61db-4965-8018-86c8aa5a8c80-lib-modules\") pod \"kube-proxy-qzg85\" (UID: \"eeb23f64-61db-4965-8018-86c8aa5a8c80\") " pod="kube-system/kube-proxy-qzg85" Oct 30 00:09:34.262900 kubelet[2876]: I1030 00:09:34.262766 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eeb23f64-61db-4965-8018-86c8aa5a8c80-kube-proxy\") pod \"kube-proxy-qzg85\" (UID: \"eeb23f64-61db-4965-8018-86c8aa5a8c80\") " pod="kube-system/kube-proxy-qzg85" Oct 30 00:09:34.262900 kubelet[2876]: I1030 00:09:34.262800 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eeb23f64-61db-4965-8018-86c8aa5a8c80-xtables-lock\") pod \"kube-proxy-qzg85\" (UID: \"eeb23f64-61db-4965-8018-86c8aa5a8c80\") " pod="kube-system/kube-proxy-qzg85" Oct 30 00:09:34.262900 kubelet[2876]: I1030 00:09:34.262867 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvvpk\" (UniqueName: \"kubernetes.io/projected/eeb23f64-61db-4965-8018-86c8aa5a8c80-kube-api-access-kvvpk\") pod \"kube-proxy-qzg85\" (UID: \"eeb23f64-61db-4965-8018-86c8aa5a8c80\") " pod="kube-system/kube-proxy-qzg85" Oct 30 00:09:34.569126 containerd[1578]: time="2025-10-30T00:09:34.568679328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qzg85,Uid:eeb23f64-61db-4965-8018-86c8aa5a8c80,Namespace:kube-system,Attempt:0,}" Oct 30 00:09:34.618958 containerd[1578]: time="2025-10-30T00:09:34.618874902Z" level=info msg="connecting to shim 87261ddc39ce8fe618f244eaa5d3505048043161e7f8e6def2f033df7b0ed76f" 
address="unix:///run/containerd/s/9c8ba2e9163edc8726ea135ab2f1852b85fbbb6a599af0e5b9f9aaa416aee41c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:09:34.699562 systemd[1]: Started cri-containerd-87261ddc39ce8fe618f244eaa5d3505048043161e7f8e6def2f033df7b0ed76f.scope - libcontainer container 87261ddc39ce8fe618f244eaa5d3505048043161e7f8e6def2f033df7b0ed76f. Oct 30 00:09:34.809951 systemd[1]: Created slice kubepods-besteffort-pod86a05b05_b001_4a8c_90b2_7535c1a3de23.slice - libcontainer container kubepods-besteffort-pod86a05b05_b001_4a8c_90b2_7535c1a3de23.slice. Oct 30 00:09:34.852678 containerd[1578]: time="2025-10-30T00:09:34.852456904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qzg85,Uid:eeb23f64-61db-4965-8018-86c8aa5a8c80,Namespace:kube-system,Attempt:0,} returns sandbox id \"87261ddc39ce8fe618f244eaa5d3505048043161e7f8e6def2f033df7b0ed76f\"" Oct 30 00:09:34.861337 containerd[1578]: time="2025-10-30T00:09:34.861280126Z" level=info msg="CreateContainer within sandbox \"87261ddc39ce8fe618f244eaa5d3505048043161e7f8e6def2f033df7b0ed76f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 00:09:34.865348 kubelet[2876]: I1030 00:09:34.865292 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86a05b05-b001-4a8c-90b2-7535c1a3de23-var-lib-calico\") pod \"tigera-operator-7dcd859c48-kwwvd\" (UID: \"86a05b05-b001-4a8c-90b2-7535c1a3de23\") " pod="tigera-operator/tigera-operator-7dcd859c48-kwwvd" Oct 30 00:09:34.865348 kubelet[2876]: I1030 00:09:34.865349 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dstwr\" (UniqueName: \"kubernetes.io/projected/86a05b05-b001-4a8c-90b2-7535c1a3de23-kube-api-access-dstwr\") pod \"tigera-operator-7dcd859c48-kwwvd\" (UID: \"86a05b05-b001-4a8c-90b2-7535c1a3de23\") " pod="tigera-operator/tigera-operator-7dcd859c48-kwwvd" 
Oct 30 00:09:34.880244 containerd[1578]: time="2025-10-30T00:09:34.880186059Z" level=info msg="Container e96f5d6de138e08cc90048f0fc335f70e47136dcffa8ce21005e2d0f315603ca: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:09:34.891457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420984617.mount: Deactivated successfully. Oct 30 00:09:34.901525 containerd[1578]: time="2025-10-30T00:09:34.901447451Z" level=info msg="CreateContainer within sandbox \"87261ddc39ce8fe618f244eaa5d3505048043161e7f8e6def2f033df7b0ed76f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e96f5d6de138e08cc90048f0fc335f70e47136dcffa8ce21005e2d0f315603ca\"" Oct 30 00:09:34.902351 containerd[1578]: time="2025-10-30T00:09:34.902257710Z" level=info msg="StartContainer for \"e96f5d6de138e08cc90048f0fc335f70e47136dcffa8ce21005e2d0f315603ca\"" Oct 30 00:09:34.905494 containerd[1578]: time="2025-10-30T00:09:34.905224732Z" level=info msg="connecting to shim e96f5d6de138e08cc90048f0fc335f70e47136dcffa8ce21005e2d0f315603ca" address="unix:///run/containerd/s/9c8ba2e9163edc8726ea135ab2f1852b85fbbb6a599af0e5b9f9aaa416aee41c" protocol=ttrpc version=3 Oct 30 00:09:34.937586 systemd[1]: Started cri-containerd-e96f5d6de138e08cc90048f0fc335f70e47136dcffa8ce21005e2d0f315603ca.scope - libcontainer container e96f5d6de138e08cc90048f0fc335f70e47136dcffa8ce21005e2d0f315603ca. 
Oct 30 00:09:35.019374 containerd[1578]: time="2025-10-30T00:09:35.019293427Z" level=info msg="StartContainer for \"e96f5d6de138e08cc90048f0fc335f70e47136dcffa8ce21005e2d0f315603ca\" returns successfully" Oct 30 00:09:35.118219 containerd[1578]: time="2025-10-30T00:09:35.118059920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kwwvd,Uid:86a05b05-b001-4a8c-90b2-7535c1a3de23,Namespace:tigera-operator,Attempt:0,}" Oct 30 00:09:35.158840 containerd[1578]: time="2025-10-30T00:09:35.158612955Z" level=info msg="connecting to shim 01debe8073c608ec24815e989beb4171d3c4eca55d17f0b354c2655b1a658e3a" address="unix:///run/containerd/s/95c2dea51a20144f4775f61f387fce050a52ebb1bd3c5d6bb160f2aefebab851" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:09:35.221662 systemd[1]: Started cri-containerd-01debe8073c608ec24815e989beb4171d3c4eca55d17f0b354c2655b1a658e3a.scope - libcontainer container 01debe8073c608ec24815e989beb4171d3c4eca55d17f0b354c2655b1a658e3a. Oct 30 00:09:35.351996 containerd[1578]: time="2025-10-30T00:09:35.351460799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kwwvd,Uid:86a05b05-b001-4a8c-90b2-7535c1a3de23,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"01debe8073c608ec24815e989beb4171d3c4eca55d17f0b354c2655b1a658e3a\"" Oct 30 00:09:35.356076 containerd[1578]: time="2025-10-30T00:09:35.355492054Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 00:09:36.874448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2787784269.mount: Deactivated successfully. 
Oct 30 00:09:37.899645 containerd[1578]: time="2025-10-30T00:09:37.899556563Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:37.901299 containerd[1578]: time="2025-10-30T00:09:37.901051841Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 30 00:09:37.902948 containerd[1578]: time="2025-10-30T00:09:37.902897465Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:37.908358 containerd[1578]: time="2025-10-30T00:09:37.908291114Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:37.910029 containerd[1578]: time="2025-10-30T00:09:37.909754133Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.554211302s" Oct 30 00:09:37.910029 containerd[1578]: time="2025-10-30T00:09:37.909833068Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 30 00:09:37.922200 containerd[1578]: time="2025-10-30T00:09:37.922146695Z" level=info msg="CreateContainer within sandbox \"01debe8073c608ec24815e989beb4171d3c4eca55d17f0b354c2655b1a658e3a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 30 00:09:37.938039 containerd[1578]: time="2025-10-30T00:09:37.935474913Z" level=info msg="Container 25f0a33b830a8bba562949c9445b3e1b34871ce32cb0a4968806d8d8b3dbb96e: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:09:37.954453 containerd[1578]: time="2025-10-30T00:09:37.954386922Z" level=info msg="CreateContainer within sandbox \"01debe8073c608ec24815e989beb4171d3c4eca55d17f0b354c2655b1a658e3a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"25f0a33b830a8bba562949c9445b3e1b34871ce32cb0a4968806d8d8b3dbb96e\"" Oct 30 00:09:37.957322 containerd[1578]: time="2025-10-30T00:09:37.957088342Z" level=info msg="StartContainer for \"25f0a33b830a8bba562949c9445b3e1b34871ce32cb0a4968806d8d8b3dbb96e\"" Oct 30 00:09:37.959019 containerd[1578]: time="2025-10-30T00:09:37.958963052Z" level=info msg="connecting to shim 25f0a33b830a8bba562949c9445b3e1b34871ce32cb0a4968806d8d8b3dbb96e" address="unix:///run/containerd/s/95c2dea51a20144f4775f61f387fce050a52ebb1bd3c5d6bb160f2aefebab851" protocol=ttrpc version=3 Oct 30 00:09:38.001426 systemd[1]: Started cri-containerd-25f0a33b830a8bba562949c9445b3e1b34871ce32cb0a4968806d8d8b3dbb96e.scope - libcontainer container 25f0a33b830a8bba562949c9445b3e1b34871ce32cb0a4968806d8d8b3dbb96e.
Oct 30 00:09:38.051133 containerd[1578]: time="2025-10-30T00:09:38.051080358Z" level=info msg="StartContainer for \"25f0a33b830a8bba562949c9445b3e1b34871ce32cb0a4968806d8d8b3dbb96e\" returns successfully" Oct 30 00:09:38.157136 kubelet[2876]: I1030 00:09:38.155572 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qzg85" podStartSLOduration=4.155545261 podStartE2EDuration="4.155545261s" podCreationTimestamp="2025-10-30 00:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:09:35.162485833 +0000 UTC m=+6.503618145" watchObservedRunningTime="2025-10-30 00:09:38.155545261 +0000 UTC m=+9.496677572" Oct 30 00:09:38.159393 kubelet[2876]: I1030 00:09:38.158833 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-kwwvd" podStartSLOduration=1.602007962 podStartE2EDuration="4.158812265s" podCreationTimestamp="2025-10-30 00:09:34 +0000 UTC" firstStartedPulling="2025-10-30 00:09:35.354985405 +0000 UTC m=+6.696117692" lastFinishedPulling="2025-10-30 00:09:37.911789696 +0000 UTC m=+9.252921995" observedRunningTime="2025-10-30 00:09:38.158692399 +0000 UTC m=+9.499824713" watchObservedRunningTime="2025-10-30 00:09:38.158812265 +0000 UTC m=+9.499944578" Oct 30 00:09:45.654626 sudo[1905]: pam_unix(sudo:session): session closed for user root Oct 30 00:09:45.704492 sshd[1904]: Connection closed by 139.178.89.65 port 46044 Oct 30 00:09:45.705523 sshd-session[1901]: pam_unix(sshd:session): session closed for user core Oct 30 00:09:45.718200 systemd[1]: sshd@8-10.128.0.23:22-139.178.89.65:46044.service: Deactivated successfully. Oct 30 00:09:45.719571 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit. Oct 30 00:09:45.731216 systemd[1]: session-9.scope: Deactivated successfully. 
Oct 30 00:09:45.731550 systemd[1]: session-9.scope: Consumed 8.677s CPU time, 233.8M memory peak. Oct 30 00:09:45.748273 systemd-logind[1548]: Removed session 9. Oct 30 00:09:52.567281 systemd[1]: Created slice kubepods-besteffort-poddf679e02_b855_407a_bf71_8f2c8e779359.slice - libcontainer container kubepods-besteffort-poddf679e02_b855_407a_bf71_8f2c8e779359.slice. Oct 30 00:09:52.595784 kubelet[2876]: I1030 00:09:52.595467 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdg44\" (UniqueName: \"kubernetes.io/projected/df679e02-b855-407a-bf71-8f2c8e779359-kube-api-access-zdg44\") pod \"calico-typha-b9b86d544-c2zfs\" (UID: \"df679e02-b855-407a-bf71-8f2c8e779359\") " pod="calico-system/calico-typha-b9b86d544-c2zfs" Oct 30 00:09:52.595784 kubelet[2876]: I1030 00:09:52.595557 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df679e02-b855-407a-bf71-8f2c8e779359-tigera-ca-bundle\") pod \"calico-typha-b9b86d544-c2zfs\" (UID: \"df679e02-b855-407a-bf71-8f2c8e779359\") " pod="calico-system/calico-typha-b9b86d544-c2zfs" Oct 30 00:09:52.595784 kubelet[2876]: I1030 00:09:52.595611 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/df679e02-b855-407a-bf71-8f2c8e779359-typha-certs\") pod \"calico-typha-b9b86d544-c2zfs\" (UID: \"df679e02-b855-407a-bf71-8f2c8e779359\") " pod="calico-system/calico-typha-b9b86d544-c2zfs" Oct 30 00:09:52.800856 kubelet[2876]: I1030 00:09:52.799184 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5c2e6137-869f-4a5b-924e-5dbb652833bb-var-run-calico\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t"
Oct 30 00:09:52.800856 kubelet[2876]: I1030 00:09:52.799247 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5c2e6137-869f-4a5b-924e-5dbb652833bb-cni-net-dir\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t" Oct 30 00:09:52.800856 kubelet[2876]: I1030 00:09:52.799282 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c2e6137-869f-4a5b-924e-5dbb652833bb-lib-modules\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t" Oct 30 00:09:52.800856 kubelet[2876]: I1030 00:09:52.799312 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5c2e6137-869f-4a5b-924e-5dbb652833bb-policysync\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t" Oct 30 00:09:52.800856 kubelet[2876]: I1030 00:09:52.799339 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c2e6137-869f-4a5b-924e-5dbb652833bb-xtables-lock\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t" Oct 30 00:09:52.801307 kubelet[2876]: I1030 00:09:52.799373 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5c2e6137-869f-4a5b-924e-5dbb652833bb-flexvol-driver-host\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t"
Oct 30 00:09:52.801307 kubelet[2876]: I1030 00:09:52.799410 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5c2e6137-869f-4a5b-924e-5dbb652833bb-node-certs\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t" Oct 30 00:09:52.801307 kubelet[2876]: I1030 00:09:52.799443 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5c2e6137-869f-4a5b-924e-5dbb652833bb-cni-log-dir\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t" Oct 30 00:09:52.801307 kubelet[2876]: I1030 00:09:52.799497 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mjjb\" (UniqueName: \"kubernetes.io/projected/5c2e6137-869f-4a5b-924e-5dbb652833bb-kube-api-access-9mjjb\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t" Oct 30 00:09:52.801307 kubelet[2876]: I1030 00:09:52.799529 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2e6137-869f-4a5b-924e-5dbb652833bb-tigera-ca-bundle\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t" Oct 30 00:09:52.801596 kubelet[2876]: I1030 00:09:52.799562 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5c2e6137-869f-4a5b-924e-5dbb652833bb-var-lib-calico\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t"
Oct 30 00:09:52.801596 kubelet[2876]: I1030 00:09:52.799598 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5c2e6137-869f-4a5b-924e-5dbb652833bb-cni-bin-dir\") pod \"calico-node-bq75t\" (UID: \"5c2e6137-869f-4a5b-924e-5dbb652833bb\") " pod="calico-system/calico-node-bq75t" Oct 30 00:09:52.802530 systemd[1]: Created slice kubepods-besteffort-pod5c2e6137_869f_4a5b_924e_5dbb652833bb.slice - libcontainer container kubepods-besteffort-pod5c2e6137_869f_4a5b_924e_5dbb652833bb.slice. Oct 30 00:09:52.874696 containerd[1578]: time="2025-10-30T00:09:52.874260676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b9b86d544-c2zfs,Uid:df679e02-b855-407a-bf71-8f2c8e779359,Namespace:calico-system,Attempt:0,}" Oct 30 00:09:52.920913 kubelet[2876]: E1030 00:09:52.920689 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:52.920913 kubelet[2876]: W1030 00:09:52.920725 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:52.920913 kubelet[2876]: E1030 00:09:52.920759 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 30 00:09:52.945113 containerd[1578]: time="2025-10-30T00:09:52.944067007Z" level=info msg="connecting to shim 8c02ef036fc25b030c3390eb6c70547e7a753579f2b7d9785fa28a4618a4e671" address="unix:///run/containerd/s/2b8511930703b9a0195afbbdf1ad522384967db14a551e95fe3f74b92b43dbd0" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:09:52.945571 kubelet[2876]: E1030 00:09:52.943993 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:52.946386 kubelet[2876]: W1030 00:09:52.945656 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:52.946386 kubelet[2876]: E1030 00:09:52.945707 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:52.967701 kubelet[2876]: E1030 00:09:52.966712 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:52.967701 kubelet[2876]: W1030 00:09:52.966744 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:52.967701 kubelet[2876]: E1030 00:09:52.966959 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.037341 systemd[1]: Started cri-containerd-8c02ef036fc25b030c3390eb6c70547e7a753579f2b7d9785fa28a4618a4e671.scope - libcontainer container 8c02ef036fc25b030c3390eb6c70547e7a753579f2b7d9785fa28a4618a4e671. 
Oct 30 00:09:53.073444 kubelet[2876]: E1030 00:09:53.073383 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7" Oct 30 00:09:53.095661 kubelet[2876]: E1030 00:09:53.095618 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.095661 kubelet[2876]: W1030 00:09:53.095655 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.096287 kubelet[2876]: E1030 00:09:53.095687 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.096287 kubelet[2876]: E1030 00:09:53.096060 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.096287 kubelet[2876]: W1030 00:09:53.096074 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.096287 kubelet[2876]: E1030 00:09:53.096092 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.098288 kubelet[2876]: E1030 00:09:53.098259 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.098288 kubelet[2876]: W1030 00:09:53.098284 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.098461 kubelet[2876]: E1030 00:09:53.098305 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.098740 kubelet[2876]: E1030 00:09:53.098716 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.098740 kubelet[2876]: W1030 00:09:53.098740 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.098896 kubelet[2876]: E1030 00:09:53.098757 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.099173 kubelet[2876]: E1030 00:09:53.099150 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.099173 kubelet[2876]: W1030 00:09:53.099172 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.099337 kubelet[2876]: E1030 00:09:53.099190 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.099893 kubelet[2876]: E1030 00:09:53.099867 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.099893 kubelet[2876]: W1030 00:09:53.099891 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.100685 kubelet[2876]: E1030 00:09:53.099909 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.100685 kubelet[2876]: E1030 00:09:53.100573 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.100685 kubelet[2876]: W1030 00:09:53.100590 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.100685 kubelet[2876]: E1030 00:09:53.100607 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.102153 kubelet[2876]: E1030 00:09:53.102124 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.102153 kubelet[2876]: W1030 00:09:53.102150 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.102334 kubelet[2876]: E1030 00:09:53.102169 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.102536 kubelet[2876]: E1030 00:09:53.102514 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.102619 kubelet[2876]: W1030 00:09:53.102544 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.102619 kubelet[2876]: E1030 00:09:53.102562 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.102916 kubelet[2876]: E1030 00:09:53.102875 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.102916 kubelet[2876]: W1030 00:09:53.102915 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.103084 kubelet[2876]: E1030 00:09:53.102932 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.103391 kubelet[2876]: E1030 00:09:53.103364 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.103391 kubelet[2876]: W1030 00:09:53.103388 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.103538 kubelet[2876]: E1030 00:09:53.103514 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.104271 kubelet[2876]: E1030 00:09:53.104244 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.104271 kubelet[2876]: W1030 00:09:53.104268 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.104271 kubelet[2876]: E1030 00:09:53.104287 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.105759 kubelet[2876]: E1030 00:09:53.105735 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.106303 kubelet[2876]: W1030 00:09:53.105989 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.106303 kubelet[2876]: E1030 00:09:53.106035 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.106828 kubelet[2876]: E1030 00:09:53.106763 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.106828 kubelet[2876]: W1030 00:09:53.106788 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.106828 kubelet[2876]: E1030 00:09:53.106814 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.107637 kubelet[2876]: E1030 00:09:53.107584 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.107749 kubelet[2876]: W1030 00:09:53.107608 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.107749 kubelet[2876]: E1030 00:09:53.107667 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.109317 kubelet[2876]: E1030 00:09:53.109291 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.109317 kubelet[2876]: W1030 00:09:53.109315 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.109483 kubelet[2876]: E1030 00:09:53.109334 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.111620 kubelet[2876]: E1030 00:09:53.111591 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.111745 kubelet[2876]: W1030 00:09:53.111625 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.111745 kubelet[2876]: E1030 00:09:53.111645 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.112159 kubelet[2876]: E1030 00:09:53.111964 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.112159 kubelet[2876]: W1030 00:09:53.111982 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.112159 kubelet[2876]: E1030 00:09:53.112000 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.112995 kubelet[2876]: E1030 00:09:53.112968 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.112995 kubelet[2876]: W1030 00:09:53.112995 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.113265 kubelet[2876]: E1030 00:09:53.113182 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.115253 kubelet[2876]: E1030 00:09:53.115218 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.115253 kubelet[2876]: W1030 00:09:53.115246 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.115411 kubelet[2876]: E1030 00:09:53.115275 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.116967 kubelet[2876]: E1030 00:09:53.116934 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.116967 kubelet[2876]: W1030 00:09:53.116963 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.117243 kubelet[2876]: E1030 00:09:53.116984 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.117566 kubelet[2876]: I1030 00:09:53.117530 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c0987c36-521e-441e-a4df-01b4de4064f7-registration-dir\") pod \"csi-node-driver-vsw6q\" (UID: \"c0987c36-521e-441e-a4df-01b4de4064f7\") " pod="calico-system/csi-node-driver-vsw6q" Oct 30 00:09:53.118341 containerd[1578]: time="2025-10-30T00:09:53.118275982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bq75t,Uid:5c2e6137-869f-4a5b-924e-5dbb652833bb,Namespace:calico-system,Attempt:0,}" Oct 30 00:09:53.121685 kubelet[2876]: E1030 00:09:53.121636 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.121685 kubelet[2876]: W1030 00:09:53.121672 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.121868 kubelet[2876]: E1030 00:09:53.121698 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.121868 kubelet[2876]: I1030 00:09:53.121736 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8f55\" (UniqueName: \"kubernetes.io/projected/c0987c36-521e-441e-a4df-01b4de4064f7-kube-api-access-v8f55\") pod \"csi-node-driver-vsw6q\" (UID: \"c0987c36-521e-441e-a4df-01b4de4064f7\") " pod="calico-system/csi-node-driver-vsw6q" Oct 30 00:09:53.123040 kubelet[2876]: E1030 00:09:53.122415 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.123040 kubelet[2876]: W1030 00:09:53.122440 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.123040 kubelet[2876]: E1030 00:09:53.122490 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.123040 kubelet[2876]: I1030 00:09:53.122528 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c0987c36-521e-441e-a4df-01b4de4064f7-varrun\") pod \"csi-node-driver-vsw6q\" (UID: \"c0987c36-521e-441e-a4df-01b4de4064f7\") " pod="calico-system/csi-node-driver-vsw6q" Oct 30 00:09:53.124158 kubelet[2876]: E1030 00:09:53.123973 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.124158 kubelet[2876]: W1030 00:09:53.123995 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.124964 kubelet[2876]: E1030 00:09:53.124320 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.124964 kubelet[2876]: I1030 00:09:53.124366 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0987c36-521e-441e-a4df-01b4de4064f7-kubelet-dir\") pod \"csi-node-driver-vsw6q\" (UID: \"c0987c36-521e-441e-a4df-01b4de4064f7\") " pod="calico-system/csi-node-driver-vsw6q" Oct 30 00:09:53.127539 kubelet[2876]: E1030 00:09:53.127473 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.127539 kubelet[2876]: W1030 00:09:53.127506 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.129190 kubelet[2876]: E1030 00:09:53.127530 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.129190 kubelet[2876]: I1030 00:09:53.127582 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c0987c36-521e-441e-a4df-01b4de4064f7-socket-dir\") pod \"csi-node-driver-vsw6q\" (UID: \"c0987c36-521e-441e-a4df-01b4de4064f7\") " pod="calico-system/csi-node-driver-vsw6q" Oct 30 00:09:53.131759 kubelet[2876]: E1030 00:09:53.131731 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.131759 kubelet[2876]: W1030 00:09:53.131760 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.133070 kubelet[2876]: E1030 00:09:53.131784 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.133426 kubelet[2876]: E1030 00:09:53.133404 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.133543 kubelet[2876]: W1030 00:09:53.133426 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.133543 kubelet[2876]: E1030 00:09:53.133448 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.175714 containerd[1578]: time="2025-10-30T00:09:53.175648005Z" level=info msg="connecting to shim b7f370f293e68573a1c85a7c62bf0c16705912fc8f8faf34044d5dd831fcba7e" address="unix:///run/containerd/s/2a636a2c3ab49aac23191d27b1a43390d0e996eafd258c8586ca50ef45fb1f71" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:09:53.230229 kubelet[2876]: E1030 00:09:53.230158 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.230852 kubelet[2876]: W1030 00:09:53.230196 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.230852 kubelet[2876]: E1030 00:09:53.230642 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:09:53.234323 systemd[1]: Started cri-containerd-b7f370f293e68573a1c85a7c62bf0c16705912fc8f8faf34044d5dd831fcba7e.scope - libcontainer container b7f370f293e68573a1c85a7c62bf0c16705912fc8f8faf34044d5dd831fcba7e. Oct 30 00:09:53.236527 kubelet[2876]: E1030 00:09:53.236403 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:09:53.236527 kubelet[2876]: W1030 00:09:53.236455 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:09:53.236527 kubelet[2876]: E1030 00:09:53.236483 2876 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:09:53.411460 containerd[1578]: time="2025-10-30T00:09:53.411277452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bq75t,Uid:5c2e6137-869f-4a5b-924e-5dbb652833bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7f370f293e68573a1c85a7c62bf0c16705912fc8f8faf34044d5dd831fcba7e\"" Oct 30 00:09:53.416654 containerd[1578]: time="2025-10-30T00:09:53.416171241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 00:09:53.429448 containerd[1578]: time="2025-10-30T00:09:53.429397259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b9b86d544-c2zfs,Uid:df679e02-b855-407a-bf71-8f2c8e779359,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c02ef036fc25b030c3390eb6c70547e7a753579f2b7d9785fa28a4618a4e671\"" Oct 30 00:09:54.412045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3998428606.mount: Deactivated successfully. Oct 30 00:09:54.557349 containerd[1578]: time="2025-10-30T00:09:54.557283251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:54.559620 containerd[1578]: time="2025-10-30T00:09:54.559122607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Oct 30 00:09:54.561890 containerd[1578]: time="2025-10-30T00:09:54.561274236Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:54.566506 containerd[1578]: time="2025-10-30T00:09:54.566462381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:09:54.568624 
containerd[1578]: time="2025-10-30T00:09:54.568547387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.151643998s" Oct 30 00:09:54.568764 containerd[1578]: time="2025-10-30T00:09:54.568638222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 30 00:09:54.570860 containerd[1578]: time="2025-10-30T00:09:54.570183848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 00:09:54.574914 containerd[1578]: time="2025-10-30T00:09:54.574850818Z" level=info msg="CreateContainer within sandbox \"b7f370f293e68573a1c85a7c62bf0c16705912fc8f8faf34044d5dd831fcba7e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 30 00:09:54.589270 containerd[1578]: time="2025-10-30T00:09:54.589220224Z" level=info msg="Container 76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:09:54.609857 containerd[1578]: time="2025-10-30T00:09:54.609767509Z" level=info msg="CreateContainer within sandbox \"b7f370f293e68573a1c85a7c62bf0c16705912fc8f8faf34044d5dd831fcba7e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4\"" Oct 30 00:09:54.611608 containerd[1578]: time="2025-10-30T00:09:54.611538919Z" level=info msg="StartContainer for \"76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4\"" Oct 30 00:09:54.615418 containerd[1578]: time="2025-10-30T00:09:54.615266070Z" level=info msg="connecting to shim 
76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4" address="unix:///run/containerd/s/2a636a2c3ab49aac23191d27b1a43390d0e996eafd258c8586ca50ef45fb1f71" protocol=ttrpc version=3 Oct 30 00:09:54.667584 systemd[1]: Started cri-containerd-76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4.scope - libcontainer container 76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4. Oct 30 00:09:54.741922 containerd[1578]: time="2025-10-30T00:09:54.741850308Z" level=info msg="StartContainer for \"76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4\" returns successfully" Oct 30 00:09:54.759642 systemd[1]: cri-containerd-76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4.scope: Deactivated successfully. Oct 30 00:09:54.765345 containerd[1578]: time="2025-10-30T00:09:54.765284929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4\" id:\"76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4\" pid:3476 exited_at:{seconds:1761782994 nanos:764543378}" Oct 30 00:09:54.765717 containerd[1578]: time="2025-10-30T00:09:54.765665479Z" level=info msg="received exit event container_id:\"76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4\" id:\"76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4\" pid:3476 exited_at:{seconds:1761782994 nanos:764543378}" Oct 30 00:09:54.810512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76d2f961b3bc2a68075a991bc9b8662e166c2b60871f579e30c196286dee98f4-rootfs.mount: Deactivated successfully. 
Oct 30 00:09:55.055057 kubelet[2876]: E1030 00:09:55.054101 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7"
Oct 30 00:09:57.054764 kubelet[2876]: E1030 00:09:57.054667 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7"
Oct 30 00:09:57.786432 containerd[1578]: time="2025-10-30T00:09:57.786356200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:09:57.788368 containerd[1578]: time="2025-10-30T00:09:57.788299535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890"
Oct 30 00:09:57.789927 containerd[1578]: time="2025-10-30T00:09:57.789872923Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:09:57.794208 containerd[1578]: time="2025-10-30T00:09:57.794125126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:09:57.796403 containerd[1578]: time="2025-10-30T00:09:57.796275581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.226046327s"
Oct 30 00:09:57.796403 containerd[1578]: time="2025-10-30T00:09:57.796338981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Oct 30 00:09:57.800624 containerd[1578]: time="2025-10-30T00:09:57.800244277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Oct 30 00:09:57.830157 containerd[1578]: time="2025-10-30T00:09:57.830092563Z" level=info msg="CreateContainer within sandbox \"8c02ef036fc25b030c3390eb6c70547e7a753579f2b7d9785fa28a4618a4e671\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 30 00:09:57.844052 containerd[1578]: time="2025-10-30T00:09:57.841660539Z" level=info msg="Container 1627625420918b490580c62f9f95720673f2f8ce97f072590e1bcf9e1d413347: CDI devices from CRI Config.CDIDevices: []"
Oct 30 00:09:57.863359 containerd[1578]: time="2025-10-30T00:09:57.863302228Z" level=info msg="CreateContainer within sandbox \"8c02ef036fc25b030c3390eb6c70547e7a753579f2b7d9785fa28a4618a4e671\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1627625420918b490580c62f9f95720673f2f8ce97f072590e1bcf9e1d413347\""
Oct 30 00:09:57.865214 containerd[1578]: time="2025-10-30T00:09:57.865164771Z" level=info msg="StartContainer for \"1627625420918b490580c62f9f95720673f2f8ce97f072590e1bcf9e1d413347\""
Oct 30 00:09:57.868253 containerd[1578]: time="2025-10-30T00:09:57.868185456Z" level=info msg="connecting to shim 1627625420918b490580c62f9f95720673f2f8ce97f072590e1bcf9e1d413347" address="unix:///run/containerd/s/2b8511930703b9a0195afbbdf1ad522384967db14a551e95fe3f74b92b43dbd0" protocol=ttrpc version=3
Oct 30 00:09:57.906317 systemd[1]: Started cri-containerd-1627625420918b490580c62f9f95720673f2f8ce97f072590e1bcf9e1d413347.scope - libcontainer container 1627625420918b490580c62f9f95720673f2f8ce97f072590e1bcf9e1d413347.
Oct 30 00:09:58.000530 containerd[1578]: time="2025-10-30T00:09:58.000482145Z" level=info msg="StartContainer for \"1627625420918b490580c62f9f95720673f2f8ce97f072590e1bcf9e1d413347\" returns successfully"
Oct 30 00:09:58.302959 kubelet[2876]: I1030 00:09:58.302877 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b9b86d544-c2zfs" podStartSLOduration=1.9367299550000001 podStartE2EDuration="6.302851822s" podCreationTimestamp="2025-10-30 00:09:52 +0000 UTC" firstStartedPulling="2025-10-30 00:09:53.431420022 +0000 UTC m=+24.772552325" lastFinishedPulling="2025-10-30 00:09:57.797541884 +0000 UTC m=+29.138674192" observedRunningTime="2025-10-30 00:09:58.302581099 +0000 UTC m=+29.643713412" watchObservedRunningTime="2025-10-30 00:09:58.302851822 +0000 UTC m=+29.643984135"
Oct 30 00:09:59.057042 kubelet[2876]: E1030 00:09:59.056472 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7"
Oct 30 00:10:01.053927 kubelet[2876]: E1030 00:10:01.053858 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7"
Oct 30 00:10:01.399053 containerd[1578]: time="2025-10-30T00:10:01.398838034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:10:01.400675 containerd[1578]: time="2025-10-30T00:10:01.400619897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Oct 30 00:10:01.402055 containerd[1578]: time="2025-10-30T00:10:01.401647115Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:10:01.408062 containerd[1578]: time="2025-10-30T00:10:01.407394162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:10:01.408810 containerd[1578]: time="2025-10-30T00:10:01.408762120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.608465016s"
Oct 30 00:10:01.408993 containerd[1578]: time="2025-10-30T00:10:01.408965367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Oct 30 00:10:01.416174 containerd[1578]: time="2025-10-30T00:10:01.416120552Z" level=info msg="CreateContainer within sandbox \"b7f370f293e68573a1c85a7c62bf0c16705912fc8f8faf34044d5dd831fcba7e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 30 00:10:01.433252 containerd[1578]: time="2025-10-30T00:10:01.433196372Z" level=info msg="Container 9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05: CDI devices from CRI Config.CDIDevices: []"
Oct 30 00:10:01.450610 containerd[1578]: time="2025-10-30T00:10:01.450542090Z" level=info msg="CreateContainer within sandbox \"b7f370f293e68573a1c85a7c62bf0c16705912fc8f8faf34044d5dd831fcba7e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05\""
Oct 30 00:10:01.453454 containerd[1578]: time="2025-10-30T00:10:01.453390053Z" level=info msg="StartContainer for \"9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05\""
Oct 30 00:10:01.463744 containerd[1578]: time="2025-10-30T00:10:01.463594722Z" level=info msg="connecting to shim 9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05" address="unix:///run/containerd/s/2a636a2c3ab49aac23191d27b1a43390d0e996eafd258c8586ca50ef45fb1f71" protocol=ttrpc version=3
Oct 30 00:10:01.510394 systemd[1]: Started cri-containerd-9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05.scope - libcontainer container 9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05.
Oct 30 00:10:01.593376 containerd[1578]: time="2025-10-30T00:10:01.593296086Z" level=info msg="StartContainer for \"9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05\" returns successfully"
Oct 30 00:10:02.665932 containerd[1578]: time="2025-10-30T00:10:02.665090530Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 30 00:10:02.670927 systemd[1]: cri-containerd-9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05.scope: Deactivated successfully.
Oct 30 00:10:02.672343 containerd[1578]: time="2025-10-30T00:10:02.672197105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05\" id:\"9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05\" pid:3573 exited_at:{seconds:1761783002 nanos:671415595}"
Oct 30 00:10:02.672396 systemd[1]: cri-containerd-9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05.scope: Consumed 754ms CPU time, 190.6M memory peak, 171.3M written to disk.
Oct 30 00:10:02.673101 containerd[1578]: time="2025-10-30T00:10:02.672978843Z" level=info msg="received exit event container_id:\"9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05\" id:\"9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05\" pid:3573 exited_at:{seconds:1761783002 nanos:671415595}"
Oct 30 00:10:02.719728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9497765bea220e034f502e9cf06c75a01a7f7dca6cd02761abd61e60c1dced05-rootfs.mount: Deactivated successfully.
Oct 30 00:10:02.724152 kubelet[2876]: I1030 00:10:02.724099 2876 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Oct 30 00:10:03.053100 systemd[1]: Created slice kubepods-besteffort-pod143027e7_a13c_4c0d_bf53_591d4038e751.slice - libcontainer container kubepods-besteffort-pod143027e7_a13c_4c0d_bf53_591d4038e751.slice.
Oct 30 00:10:03.149523 kubelet[2876]: I1030 00:10:03.149434 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/143027e7-a13c-4c0d-bf53-591d4038e751-calico-apiserver-certs\") pod \"calico-apiserver-f747677c9-846mn\" (UID: \"143027e7-a13c-4c0d-bf53-591d4038e751\") " pod="calico-apiserver/calico-apiserver-f747677c9-846mn"
Oct 30 00:10:03.149807 kubelet[2876]: I1030 00:10:03.149535 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpn7n\" (UniqueName: \"kubernetes.io/projected/143027e7-a13c-4c0d-bf53-591d4038e751-kube-api-access-wpn7n\") pod \"calico-apiserver-f747677c9-846mn\" (UID: \"143027e7-a13c-4c0d-bf53-591d4038e751\") " pod="calico-apiserver/calico-apiserver-f747677c9-846mn"
Oct 30 00:10:03.250641 kubelet[2876]: I1030 00:10:03.250321 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8d88e52b-8692-40fc-8148-df9b93ac1570-whisker-backend-key-pair\") pod \"whisker-6b9f745fb4-lqnkl\" (UID: \"8d88e52b-8692-40fc-8148-df9b93ac1570\") " pod="calico-system/whisker-6b9f745fb4-lqnkl"
Oct 30 00:10:03.250641 kubelet[2876]: I1030 00:10:03.250396 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d88e52b-8692-40fc-8148-df9b93ac1570-whisker-ca-bundle\") pod \"whisker-6b9f745fb4-lqnkl\" (UID: \"8d88e52b-8692-40fc-8148-df9b93ac1570\") " pod="calico-system/whisker-6b9f745fb4-lqnkl"
Oct 30 00:10:03.250641 kubelet[2876]: I1030 00:10:03.250424 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvvnp\" (UniqueName: \"kubernetes.io/projected/8d88e52b-8692-40fc-8148-df9b93ac1570-kube-api-access-cvvnp\") pod \"whisker-6b9f745fb4-lqnkl\" (UID: \"8d88e52b-8692-40fc-8148-df9b93ac1570\") " pod="calico-system/whisker-6b9f745fb4-lqnkl"
Oct 30 00:10:03.418780 systemd[1]: Created slice kubepods-besteffort-podc0987c36_521e_441e_a4df_01b4de4064f7.slice - libcontainer container kubepods-besteffort-podc0987c36_521e_441e_a4df_01b4de4064f7.slice.
Oct 30 00:10:03.563864 kubelet[2876]: I1030 00:10:03.453047 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcc8b342-9bf9-42ad-8c6d-8b01c298be9a-config-volume\") pod \"coredns-674b8bbfcf-7r8v6\" (UID: \"bcc8b342-9bf9-42ad-8c6d-8b01c298be9a\") " pod="kube-system/coredns-674b8bbfcf-7r8v6"
Oct 30 00:10:03.563864 kubelet[2876]: I1030 00:10:03.453113 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpgr6\" (UniqueName: \"kubernetes.io/projected/bcc8b342-9bf9-42ad-8c6d-8b01c298be9a-kube-api-access-gpgr6\") pod \"coredns-674b8bbfcf-7r8v6\" (UID: \"bcc8b342-9bf9-42ad-8c6d-8b01c298be9a\") " pod="kube-system/coredns-674b8bbfcf-7r8v6"
Oct 30 00:10:03.563864 kubelet[2876]: E1030 00:10:03.554963 2876 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Oct 30 00:10:03.563864 kubelet[2876]: E1030 00:10:03.555101 2876 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc8b342-9bf9-42ad-8c6d-8b01c298be9a-config-volume podName:bcc8b342-9bf9-42ad-8c6d-8b01c298be9a nodeName:}" failed. No retries permitted until 2025-10-30 00:10:04.05506795 +0000 UTC m=+35.396200264 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bcc8b342-9bf9-42ad-8c6d-8b01c298be9a-config-volume") pod "coredns-674b8bbfcf-7r8v6" (UID: "bcc8b342-9bf9-42ad-8c6d-8b01c298be9a") : object "kube-system"/"coredns" not registered
Oct 30 00:10:03.430468 systemd[1]: Created slice kubepods-besteffort-pod8d88e52b_8692_40fc_8148_df9b93ac1570.slice - libcontainer container kubepods-besteffort-pod8d88e52b_8692_40fc_8148_df9b93ac1570.slice.
Oct 30 00:10:03.569466 containerd[1578]: time="2025-10-30T00:10:03.568167078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vsw6q,Uid:c0987c36-521e-441e-a4df-01b4de4064f7,Namespace:calico-system,Attempt:0,}"
Oct 30 00:10:03.625901 systemd[1]: Created slice kubepods-burstable-podbcc8b342_9bf9_42ad_8c6d_8b01c298be9a.slice - libcontainer container kubepods-burstable-podbcc8b342_9bf9_42ad_8c6d_8b01c298be9a.slice.
Oct 30 00:10:03.655041 kubelet[2876]: I1030 00:10:03.654476 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96hht\" (UniqueName: \"kubernetes.io/projected/a484d50b-2eb4-492d-b284-def42903781b-kube-api-access-96hht\") pod \"coredns-674b8bbfcf-8kvh9\" (UID: \"a484d50b-2eb4-492d-b284-def42903781b\") " pod="kube-system/coredns-674b8bbfcf-8kvh9"
Oct 30 00:10:03.655041 kubelet[2876]: I1030 00:10:03.654533 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a484d50b-2eb4-492d-b284-def42903781b-config-volume\") pod \"coredns-674b8bbfcf-8kvh9\" (UID: \"a484d50b-2eb4-492d-b284-def42903781b\") " pod="kube-system/coredns-674b8bbfcf-8kvh9"
Oct 30 00:10:03.663254 containerd[1578]: time="2025-10-30T00:10:03.663205938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f747677c9-846mn,Uid:143027e7-a13c-4c0d-bf53-591d4038e751,Namespace:calico-apiserver,Attempt:0,}"
Oct 30 00:10:03.666183 systemd[1]: Created slice kubepods-burstable-poda484d50b_2eb4_492d_b284_def42903781b.slice - libcontainer container kubepods-burstable-poda484d50b_2eb4_492d_b284_def42903781b.slice.
Oct 30 00:10:03.698346 systemd[1]: Created slice kubepods-besteffort-podf10b6c03_ea69_40d6_8304_f2729f28ebe7.slice - libcontainer container kubepods-besteffort-podf10b6c03_ea69_40d6_8304_f2729f28ebe7.slice.
Oct 30 00:10:03.714557 systemd[1]: Created slice kubepods-besteffort-pod2a97c946_b833_49fb_b0be_330885d32847.slice - libcontainer container kubepods-besteffort-pod2a97c946_b833_49fb_b0be_330885d32847.slice.
Oct 30 00:10:03.742067 containerd[1578]: time="2025-10-30T00:10:03.740399057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b9f745fb4-lqnkl,Uid:8d88e52b-8692-40fc-8148-df9b93ac1570,Namespace:calico-system,Attempt:0,}"
Oct 30 00:10:03.757709 kubelet[2876]: I1030 00:10:03.757657 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a97c946-b833-49fb-b0be-330885d32847-config\") pod \"goldmane-666569f655-kskmh\" (UID: \"2a97c946-b833-49fb-b0be-330885d32847\") " pod="calico-system/goldmane-666569f655-kskmh"
Oct 30 00:10:03.757709 kubelet[2876]: I1030 00:10:03.757723 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhzdc\" (UniqueName: \"kubernetes.io/projected/2a97c946-b833-49fb-b0be-330885d32847-kube-api-access-xhzdc\") pod \"goldmane-666569f655-kskmh\" (UID: \"2a97c946-b833-49fb-b0be-330885d32847\") " pod="calico-system/goldmane-666569f655-kskmh"
Oct 30 00:10:03.760964 kubelet[2876]: I1030 00:10:03.757761 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f10b6c03-ea69-40d6-8304-f2729f28ebe7-tigera-ca-bundle\") pod \"calico-kube-controllers-585ffdbd84-kh6p2\" (UID: \"f10b6c03-ea69-40d6-8304-f2729f28ebe7\") " pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2"
Oct 30 00:10:03.760964 kubelet[2876]: I1030 00:10:03.757798 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gddmc\" (UniqueName: \"kubernetes.io/projected/c462b67e-383a-4a79-a697-1a4848277370-kube-api-access-gddmc\") pod \"calico-apiserver-f747677c9-bj47h\" (UID: \"c462b67e-383a-4a79-a697-1a4848277370\") " pod="calico-apiserver/calico-apiserver-f747677c9-bj47h"
Oct 30 00:10:03.760964 kubelet[2876]: I1030 00:10:03.757854 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8vgf\" (UniqueName: \"kubernetes.io/projected/f10b6c03-ea69-40d6-8304-f2729f28ebe7-kube-api-access-k8vgf\") pod \"calico-kube-controllers-585ffdbd84-kh6p2\" (UID: \"f10b6c03-ea69-40d6-8304-f2729f28ebe7\") " pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2"
Oct 30 00:10:03.760964 kubelet[2876]: I1030 00:10:03.757918 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c462b67e-383a-4a79-a697-1a4848277370-calico-apiserver-certs\") pod \"calico-apiserver-f747677c9-bj47h\" (UID: \"c462b67e-383a-4a79-a697-1a4848277370\") " pod="calico-apiserver/calico-apiserver-f747677c9-bj47h"
Oct 30 00:10:03.760964 kubelet[2876]: I1030 00:10:03.758002 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a97c946-b833-49fb-b0be-330885d32847-goldmane-ca-bundle\") pod \"goldmane-666569f655-kskmh\" (UID: \"2a97c946-b833-49fb-b0be-330885d32847\") " pod="calico-system/goldmane-666569f655-kskmh"
Oct 30 00:10:03.761682 kubelet[2876]: I1030 00:10:03.761172 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2a97c946-b833-49fb-b0be-330885d32847-goldmane-key-pair\") pod \"goldmane-666569f655-kskmh\" (UID: \"2a97c946-b833-49fb-b0be-330885d32847\") " pod="calico-system/goldmane-666569f655-kskmh"
Oct 30 00:10:03.782931 systemd[1]: Created slice kubepods-besteffort-podc462b67e_383a_4a79_a697_1a4848277370.slice - libcontainer container kubepods-besteffort-podc462b67e_383a_4a79_a697_1a4848277370.slice.
Oct 30 00:10:03.985686 containerd[1578]: time="2025-10-30T00:10:03.985627982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kvh9,Uid:a484d50b-2eb4-492d-b284-def42903781b,Namespace:kube-system,Attempt:0,}"
Oct 30 00:10:04.039110 containerd[1578]: time="2025-10-30T00:10:04.038964138Z" level=error msg="Failed to destroy network for sandbox \"c42766075b77326e532efcae5bddac64718038b5b8b1bbbb7e6991dd789ed83b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.044174 containerd[1578]: time="2025-10-30T00:10:04.044099245Z" level=error msg="Failed to destroy network for sandbox \"0abf00e235e20570610118d3aff44efe0428c257f2293805ae7e14b3941d58ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.044619 containerd[1578]: time="2025-10-30T00:10:04.044110950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vsw6q,Uid:c0987c36-521e-441e-a4df-01b4de4064f7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c42766075b77326e532efcae5bddac64718038b5b8b1bbbb7e6991dd789ed83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.044927 kubelet[2876]: E1030 00:10:04.044720 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c42766075b77326e532efcae5bddac64718038b5b8b1bbbb7e6991dd789ed83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.045932 kubelet[2876]: E1030 00:10:04.044999 2876 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c42766075b77326e532efcae5bddac64718038b5b8b1bbbb7e6991dd789ed83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vsw6q"
Oct 30 00:10:04.045932 kubelet[2876]: E1030 00:10:04.045239 2876 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c42766075b77326e532efcae5bddac64718038b5b8b1bbbb7e6991dd789ed83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vsw6q"
Oct 30 00:10:04.045932 kubelet[2876]: E1030 00:10:04.045398 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vsw6q_calico-system(c0987c36-521e-441e-a4df-01b4de4064f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vsw6q_calico-system(c0987c36-521e-441e-a4df-01b4de4064f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c42766075b77326e532efcae5bddac64718038b5b8b1bbbb7e6991dd789ed83b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7"
Oct 30 00:10:04.049521 containerd[1578]: time="2025-10-30T00:10:04.049235418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b9f745fb4-lqnkl,Uid:8d88e52b-8692-40fc-8148-df9b93ac1570,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0abf00e235e20570610118d3aff44efe0428c257f2293805ae7e14b3941d58ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.052158 kubelet[2876]: E1030 00:10:04.052096 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0abf00e235e20570610118d3aff44efe0428c257f2293805ae7e14b3941d58ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.052343 kubelet[2876]: E1030 00:10:04.052188 2876 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0abf00e235e20570610118d3aff44efe0428c257f2293805ae7e14b3941d58ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b9f745fb4-lqnkl"
Oct 30 00:10:04.052343 kubelet[2876]: E1030 00:10:04.052228 2876 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0abf00e235e20570610118d3aff44efe0428c257f2293805ae7e14b3941d58ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b9f745fb4-lqnkl"
Oct 30 00:10:04.052343 kubelet[2876]: E1030 00:10:04.052305 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b9f745fb4-lqnkl_calico-system(8d88e52b-8692-40fc-8148-df9b93ac1570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6b9f745fb4-lqnkl_calico-system(8d88e52b-8692-40fc-8148-df9b93ac1570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0abf00e235e20570610118d3aff44efe0428c257f2293805ae7e14b3941d58ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b9f745fb4-lqnkl" podUID="8d88e52b-8692-40fc-8148-df9b93ac1570"
Oct 30 00:10:04.065365 containerd[1578]: time="2025-10-30T00:10:04.065307312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-585ffdbd84-kh6p2,Uid:f10b6c03-ea69-40d6-8304-f2729f28ebe7,Namespace:calico-system,Attempt:0,}"
Oct 30 00:10:04.081779 containerd[1578]: time="2025-10-30T00:10:04.081703599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kskmh,Uid:2a97c946-b833-49fb-b0be-330885d32847,Namespace:calico-system,Attempt:0,}"
Oct 30 00:10:04.085844 containerd[1578]: time="2025-10-30T00:10:04.085646905Z" level=error msg="Failed to destroy network for sandbox \"7181afbed9bb8bc37c37bfb2ebd3ec75691206be98a84ef11f130ef4a1208cc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.090277 containerd[1578]: time="2025-10-30T00:10:04.090188817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f747677c9-846mn,Uid:143027e7-a13c-4c0d-bf53-591d4038e751,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7181afbed9bb8bc37c37bfb2ebd3ec75691206be98a84ef11f130ef4a1208cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.090782 kubelet[2876]: E1030 00:10:04.090525 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7181afbed9bb8bc37c37bfb2ebd3ec75691206be98a84ef11f130ef4a1208cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.090782 kubelet[2876]: E1030 00:10:04.090622 2876 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7181afbed9bb8bc37c37bfb2ebd3ec75691206be98a84ef11f130ef4a1208cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f747677c9-846mn"
Oct 30 00:10:04.090782 kubelet[2876]: E1030 00:10:04.090670 2876 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7181afbed9bb8bc37c37bfb2ebd3ec75691206be98a84ef11f130ef4a1208cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f747677c9-846mn"
Oct 30 00:10:04.090997 kubelet[2876]: E1030 00:10:04.090748 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f747677c9-846mn_calico-apiserver(143027e7-a13c-4c0d-bf53-591d4038e751)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f747677c9-846mn_calico-apiserver(143027e7-a13c-4c0d-bf53-591d4038e751)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7181afbed9bb8bc37c37bfb2ebd3ec75691206be98a84ef11f130ef4a1208cc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f747677c9-846mn" podUID="143027e7-a13c-4c0d-bf53-591d4038e751"
Oct 30 00:10:04.102721 containerd[1578]: time="2025-10-30T00:10:04.102653148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f747677c9-bj47h,Uid:c462b67e-383a-4a79-a697-1a4848277370,Namespace:calico-apiserver,Attempt:0,}"
Oct 30 00:10:04.219352 containerd[1578]: time="2025-10-30T00:10:04.219272886Z" level=error msg="Failed to destroy network for sandbox \"2db0a0b41ca9a248f3caeea9414a0cf45bf680fde7a3d11bcabd0e50a17a11bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.223902 containerd[1578]: time="2025-10-30T00:10:04.223825824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kvh9,Uid:a484d50b-2eb4-492d-b284-def42903781b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db0a0b41ca9a248f3caeea9414a0cf45bf680fde7a3d11bcabd0e50a17a11bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.224624 kubelet[2876]: E1030 00:10:04.224556 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db0a0b41ca9a248f3caeea9414a0cf45bf680fde7a3d11bcabd0e50a17a11bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.224976 kubelet[2876]: E1030 00:10:04.224900 2876 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db0a0b41ca9a248f3caeea9414a0cf45bf680fde7a3d11bcabd0e50a17a11bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8kvh9"
Oct 30 00:10:04.225359 kubelet[2876]: E1030 00:10:04.225108 2876 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db0a0b41ca9a248f3caeea9414a0cf45bf680fde7a3d11bcabd0e50a17a11bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8kvh9"
Oct 30 00:10:04.227350 kubelet[2876]: E1030 00:10:04.227259 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8kvh9_kube-system(a484d50b-2eb4-492d-b284-def42903781b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8kvh9_kube-system(a484d50b-2eb4-492d-b284-def42903781b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2db0a0b41ca9a248f3caeea9414a0cf45bf680fde7a3d11bcabd0e50a17a11bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8kvh9" podUID="a484d50b-2eb4-492d-b284-def42903781b"
Oct 30 00:10:04.244269 containerd[1578]: time="2025-10-30T00:10:04.243991678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7r8v6,Uid:bcc8b342-9bf9-42ad-8c6d-8b01c298be9a,Namespace:kube-system,Attempt:0,}"
Oct 30 00:10:04.308356 containerd[1578]: time="2025-10-30T00:10:04.308243367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Oct 30 00:10:04.422993 containerd[1578]: time="2025-10-30T00:10:04.422910873Z" level=error msg="Failed to destroy network for sandbox \"d8be0a5a2625e3c6a457b5382d32a32ba58cbfbaca8176cb3fbed91bc8cc1402\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.427606 containerd[1578]: time="2025-10-30T00:10:04.427526405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-585ffdbd84-kh6p2,Uid:f10b6c03-ea69-40d6-8304-f2729f28ebe7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8be0a5a2625e3c6a457b5382d32a32ba58cbfbaca8176cb3fbed91bc8cc1402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 30 00:10:04.428413 kubelet[2876]: E1030 00:10:04.428340 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8be0a5a2625e3c6a457b5382d32a32ba58cbfbaca8176cb3fbed91bc8cc1402\": plugin
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:10:04.428724 kubelet[2876]: E1030 00:10:04.428656 2876 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8be0a5a2625e3c6a457b5382d32a32ba58cbfbaca8176cb3fbed91bc8cc1402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2" Oct 30 00:10:04.428934 kubelet[2876]: E1030 00:10:04.428701 2876 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8be0a5a2625e3c6a457b5382d32a32ba58cbfbaca8176cb3fbed91bc8cc1402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2" Oct 30 00:10:04.429919 kubelet[2876]: E1030 00:10:04.429786 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-585ffdbd84-kh6p2_calico-system(f10b6c03-ea69-40d6-8304-f2729f28ebe7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-585ffdbd84-kh6p2_calico-system(f10b6c03-ea69-40d6-8304-f2729f28ebe7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8be0a5a2625e3c6a457b5382d32a32ba58cbfbaca8176cb3fbed91bc8cc1402\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2" 
podUID="f10b6c03-ea69-40d6-8304-f2729f28ebe7" Oct 30 00:10:04.445984 containerd[1578]: time="2025-10-30T00:10:04.445787264Z" level=error msg="Failed to destroy network for sandbox \"c662f5fa820db17e5566b167262072598e61f3a7ff8c4e6693381a1bdd5a5315\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:10:04.449941 containerd[1578]: time="2025-10-30T00:10:04.449871313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kskmh,Uid:2a97c946-b833-49fb-b0be-330885d32847,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c662f5fa820db17e5566b167262072598e61f3a7ff8c4e6693381a1bdd5a5315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:10:04.452835 kubelet[2876]: E1030 00:10:04.452088 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c662f5fa820db17e5566b167262072598e61f3a7ff8c4e6693381a1bdd5a5315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:10:04.452835 kubelet[2876]: E1030 00:10:04.452223 2876 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c662f5fa820db17e5566b167262072598e61f3a7ff8c4e6693381a1bdd5a5315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kskmh" Oct 30 00:10:04.452835 kubelet[2876]: E1030 
00:10:04.452262 2876 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c662f5fa820db17e5566b167262072598e61f3a7ff8c4e6693381a1bdd5a5315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kskmh" Oct 30 00:10:04.453288 kubelet[2876]: E1030 00:10:04.452452 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-kskmh_calico-system(2a97c946-b833-49fb-b0be-330885d32847)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-kskmh_calico-system(2a97c946-b833-49fb-b0be-330885d32847)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c662f5fa820db17e5566b167262072598e61f3a7ff8c4e6693381a1bdd5a5315\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-kskmh" podUID="2a97c946-b833-49fb-b0be-330885d32847" Oct 30 00:10:04.455806 containerd[1578]: time="2025-10-30T00:10:04.455707042Z" level=error msg="Failed to destroy network for sandbox \"8432e009d870ab31bf1bb6b51f6b93a017054ba6a78ae4218552ba98f2871fbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:10:04.458851 containerd[1578]: time="2025-10-30T00:10:04.458691514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f747677c9-bj47h,Uid:c462b67e-383a-4a79-a697-1a4848277370,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8432e009d870ab31bf1bb6b51f6b93a017054ba6a78ae4218552ba98f2871fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:10:04.461298 kubelet[2876]: E1030 00:10:04.461249 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8432e009d870ab31bf1bb6b51f6b93a017054ba6a78ae4218552ba98f2871fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:10:04.462184 kubelet[2876]: E1030 00:10:04.462057 2876 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8432e009d870ab31bf1bb6b51f6b93a017054ba6a78ae4218552ba98f2871fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f747677c9-bj47h" Oct 30 00:10:04.462393 kubelet[2876]: E1030 00:10:04.462144 2876 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8432e009d870ab31bf1bb6b51f6b93a017054ba6a78ae4218552ba98f2871fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f747677c9-bj47h" Oct 30 00:10:04.462773 kubelet[2876]: E1030 00:10:04.462563 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f747677c9-bj47h_calico-apiserver(c462b67e-383a-4a79-a697-1a4848277370)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-f747677c9-bj47h_calico-apiserver(c462b67e-383a-4a79-a697-1a4848277370)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8432e009d870ab31bf1bb6b51f6b93a017054ba6a78ae4218552ba98f2871fbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f747677c9-bj47h" podUID="c462b67e-383a-4a79-a697-1a4848277370" Oct 30 00:10:04.483882 containerd[1578]: time="2025-10-30T00:10:04.483803668Z" level=error msg="Failed to destroy network for sandbox \"79d4b2bbb25f9e27f1889092e887221978d39664ab0c8aee80017a0970e31e95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:10:04.486474 containerd[1578]: time="2025-10-30T00:10:04.486226085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7r8v6,Uid:bcc8b342-9bf9-42ad-8c6d-8b01c298be9a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d4b2bbb25f9e27f1889092e887221978d39664ab0c8aee80017a0970e31e95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:10:04.486946 kubelet[2876]: E1030 00:10:04.486864 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d4b2bbb25f9e27f1889092e887221978d39664ab0c8aee80017a0970e31e95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:10:04.487183 kubelet[2876]: E1030 
00:10:04.486951 2876 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d4b2bbb25f9e27f1889092e887221978d39664ab0c8aee80017a0970e31e95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7r8v6" Oct 30 00:10:04.487183 kubelet[2876]: E1030 00:10:04.486983 2876 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d4b2bbb25f9e27f1889092e887221978d39664ab0c8aee80017a0970e31e95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7r8v6" Oct 30 00:10:04.487575 kubelet[2876]: E1030 00:10:04.487186 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7r8v6_kube-system(bcc8b342-9bf9-42ad-8c6d-8b01c298be9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7r8v6_kube-system(bcc8b342-9bf9-42ad-8c6d-8b01c298be9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79d4b2bbb25f9e27f1889092e887221978d39664ab0c8aee80017a0970e31e95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7r8v6" podUID="bcc8b342-9bf9-42ad-8c6d-8b01c298be9a" Oct 30 00:10:04.724847 systemd[1]: run-netns-cni\x2d7fd8a694\x2d8cab\x2d7793\x2d3b51\x2d4a2333bb1f85.mount: Deactivated successfully. 
Oct 30 00:10:04.725034 systemd[1]: run-netns-cni\x2d32aa38b8\x2d4434\x2d5642\x2d2976\x2dd13bb59b04b2.mount: Deactivated successfully.
Oct 30 00:10:04.725185 systemd[1]: run-netns-cni\x2d5b7e86b5\x2ddea4\x2d8946\x2de26f\x2d15af8db5588c.mount: Deactivated successfully.
Oct 30 00:10:13.197049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519102578.mount: Deactivated successfully.
Oct 30 00:10:13.239282 containerd[1578]: time="2025-10-30T00:10:13.239185324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:10:13.240994 containerd[1578]: time="2025-10-30T00:10:13.240727399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675"
Oct 30 00:10:13.242648 containerd[1578]: time="2025-10-30T00:10:13.242601013Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:10:13.245962 containerd[1578]: time="2025-10-30T00:10:13.245920828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:10:13.247215 containerd[1578]: time="2025-10-30T00:10:13.247004970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.93868552s"
Oct 30 00:10:13.247215 containerd[1578]: time="2025-10-30T00:10:13.247098055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Oct 30 00:10:13.280911 containerd[1578]: time="2025-10-30T00:10:13.280820227Z" level=info msg="CreateContainer within sandbox \"b7f370f293e68573a1c85a7c62bf0c16705912fc8f8faf34044d5dd831fcba7e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Oct 30 00:10:13.296586 containerd[1578]: time="2025-10-30T00:10:13.296525433Z" level=info msg="Container 38a0adea01dfb6be18e66eb0d3d7fbbbd7ad64e31bf5f97a6b13913737d44dc1: CDI devices from CRI Config.CDIDevices: []"
Oct 30 00:10:13.312524 containerd[1578]: time="2025-10-30T00:10:13.312448956Z" level=info msg="CreateContainer within sandbox \"b7f370f293e68573a1c85a7c62bf0c16705912fc8f8faf34044d5dd831fcba7e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"38a0adea01dfb6be18e66eb0d3d7fbbbd7ad64e31bf5f97a6b13913737d44dc1\""
Oct 30 00:10:13.314532 containerd[1578]: time="2025-10-30T00:10:13.313300831Z" level=info msg="StartContainer for \"38a0adea01dfb6be18e66eb0d3d7fbbbd7ad64e31bf5f97a6b13913737d44dc1\""
Oct 30 00:10:13.316717 containerd[1578]: time="2025-10-30T00:10:13.316680218Z" level=info msg="connecting to shim 38a0adea01dfb6be18e66eb0d3d7fbbbd7ad64e31bf5f97a6b13913737d44dc1" address="unix:///run/containerd/s/2a636a2c3ab49aac23191d27b1a43390d0e996eafd258c8586ca50ef45fb1f71" protocol=ttrpc version=3
Oct 30 00:10:13.355307 systemd[1]: Started cri-containerd-38a0adea01dfb6be18e66eb0d3d7fbbbd7ad64e31bf5f97a6b13913737d44dc1.scope - libcontainer container 38a0adea01dfb6be18e66eb0d3d7fbbbd7ad64e31bf5f97a6b13913737d44dc1.
Oct 30 00:10:13.439563 containerd[1578]: time="2025-10-30T00:10:13.439515560Z" level=info msg="StartContainer for \"38a0adea01dfb6be18e66eb0d3d7fbbbd7ad64e31bf5f97a6b13913737d44dc1\" returns successfully"
Oct 30 00:10:13.586200 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Oct 30 00:10:13.586371 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Oct 30 00:10:13.848471 kubelet[2876]: I1030 00:10:13.847479 2876 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvvnp\" (UniqueName: \"kubernetes.io/projected/8d88e52b-8692-40fc-8148-df9b93ac1570-kube-api-access-cvvnp\") pod \"8d88e52b-8692-40fc-8148-df9b93ac1570\" (UID: \"8d88e52b-8692-40fc-8148-df9b93ac1570\") "
Oct 30 00:10:13.851855 kubelet[2876]: I1030 00:10:13.850438 2876 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d88e52b-8692-40fc-8148-df9b93ac1570-whisker-ca-bundle\") pod \"8d88e52b-8692-40fc-8148-df9b93ac1570\" (UID: \"8d88e52b-8692-40fc-8148-df9b93ac1570\") "
Oct 30 00:10:13.851855 kubelet[2876]: I1030 00:10:13.850509 2876 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8d88e52b-8692-40fc-8148-df9b93ac1570-whisker-backend-key-pair\") pod \"8d88e52b-8692-40fc-8148-df9b93ac1570\" (UID: \"8d88e52b-8692-40fc-8148-df9b93ac1570\") "
Oct 30 00:10:13.852902 kubelet[2876]: I1030 00:10:13.852769 2876 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d88e52b-8692-40fc-8148-df9b93ac1570-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8d88e52b-8692-40fc-8148-df9b93ac1570" (UID: "8d88e52b-8692-40fc-8148-df9b93ac1570"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Oct 30 00:10:13.856330 kubelet[2876]: I1030 00:10:13.856260 2876 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d88e52b-8692-40fc-8148-df9b93ac1570-kube-api-access-cvvnp" (OuterVolumeSpecName: "kube-api-access-cvvnp") pod "8d88e52b-8692-40fc-8148-df9b93ac1570" (UID: "8d88e52b-8692-40fc-8148-df9b93ac1570"). InnerVolumeSpecName "kube-api-access-cvvnp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 30 00:10:13.861696 kubelet[2876]: I1030 00:10:13.861565 2876 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d88e52b-8692-40fc-8148-df9b93ac1570-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8d88e52b-8692-40fc-8148-df9b93ac1570" (UID: "8d88e52b-8692-40fc-8148-df9b93ac1570"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Oct 30 00:10:13.951923 kubelet[2876]: I1030 00:10:13.951870 2876 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cvvnp\" (UniqueName: \"kubernetes.io/projected/8d88e52b-8692-40fc-8148-df9b93ac1570-kube-api-access-cvvnp\") on node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" DevicePath \"\""
Oct 30 00:10:13.952816 kubelet[2876]: I1030 00:10:13.952711 2876 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d88e52b-8692-40fc-8148-df9b93ac1570-whisker-ca-bundle\") on node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" DevicePath \"\""
Oct 30 00:10:13.953136 kubelet[2876]: I1030 00:10:13.953072 2876 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8d88e52b-8692-40fc-8148-df9b93ac1570-whisker-backend-key-pair\") on node \"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8\" DevicePath \"\""
Oct 30 00:10:14.197496 systemd[1]: var-lib-kubelet-pods-8d88e52b\x2d8692\x2d40fc\x2d8148\x2ddf9b93ac1570-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcvvnp.mount: Deactivated successfully.
Oct 30 00:10:14.197664 systemd[1]: var-lib-kubelet-pods-8d88e52b\x2d8692\x2d40fc\x2d8148\x2ddf9b93ac1570-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Oct 30 00:10:14.364382 systemd[1]: Removed slice kubepods-besteffort-pod8d88e52b_8692_40fc_8148_df9b93ac1570.slice - libcontainer container kubepods-besteffort-pod8d88e52b_8692_40fc_8148_df9b93ac1570.slice.
Oct 30 00:10:14.384458 kubelet[2876]: I1030 00:10:14.383717 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bq75t" podStartSLOduration=2.549321372 podStartE2EDuration="22.383691231s" podCreationTimestamp="2025-10-30 00:09:52 +0000 UTC" firstStartedPulling="2025-10-30 00:09:53.414330363 +0000 UTC m=+24.755462670" lastFinishedPulling="2025-10-30 00:10:13.24870023 +0000 UTC m=+44.589832529" observedRunningTime="2025-10-30 00:10:14.382377939 +0000 UTC m=+45.723510261" watchObservedRunningTime="2025-10-30 00:10:14.383691231 +0000 UTC m=+45.724823542"
Oct 30 00:10:14.470808 systemd[1]: Created slice kubepods-besteffort-podecf4255f_62f0_4818_8c75_902857f1c600.slice - libcontainer container kubepods-besteffort-podecf4255f_62f0_4818_8c75_902857f1c600.slice.
Oct 30 00:10:14.556080 kubelet[2876]: I1030 00:10:14.555997 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdm26\" (UniqueName: \"kubernetes.io/projected/ecf4255f-62f0-4818-8c75-902857f1c600-kube-api-access-xdm26\") pod \"whisker-687d4c5f4-hq5zw\" (UID: \"ecf4255f-62f0-4818-8c75-902857f1c600\") " pod="calico-system/whisker-687d4c5f4-hq5zw"
Oct 30 00:10:14.556415 kubelet[2876]: I1030 00:10:14.556381 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ecf4255f-62f0-4818-8c75-902857f1c600-whisker-backend-key-pair\") pod \"whisker-687d4c5f4-hq5zw\" (UID: \"ecf4255f-62f0-4818-8c75-902857f1c600\") " pod="calico-system/whisker-687d4c5f4-hq5zw"
Oct 30 00:10:14.556558 kubelet[2876]: I1030 00:10:14.556441 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecf4255f-62f0-4818-8c75-902857f1c600-whisker-ca-bundle\") pod \"whisker-687d4c5f4-hq5zw\" (UID: \"ecf4255f-62f0-4818-8c75-902857f1c600\") " pod="calico-system/whisker-687d4c5f4-hq5zw"
Oct 30 00:10:14.777665 containerd[1578]: time="2025-10-30T00:10:14.777500859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-687d4c5f4-hq5zw,Uid:ecf4255f-62f0-4818-8c75-902857f1c600,Namespace:calico-system,Attempt:0,}"
Oct 30 00:10:14.949668 systemd-networkd[1445]: calicf35e3db376: Link UP
Oct 30 00:10:14.950673 systemd-networkd[1445]: calicf35e3db376: Gained carrier
Oct 30 00:10:14.978841 containerd[1578]: 2025-10-30 00:10:14.819 [INFO][3901] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Oct 30 00:10:14.978841 containerd[1578]: 2025-10-30 00:10:14.836 [INFO][3901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0 whisker-687d4c5f4- calico-system ecf4255f-62f0-4818-8c75-902857f1c600 906 0 2025-10-30 00:10:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:687d4c5f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8 whisker-687d4c5f4-hq5zw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicf35e3db376 [] [] }} ContainerID="ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" Namespace="calico-system" Pod="whisker-687d4c5f4-hq5zw" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-"
Oct 30 00:10:14.978841 containerd[1578]: 2025-10-30 00:10:14.837 [INFO][3901] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" Namespace="calico-system" Pod="whisker-687d4c5f4-hq5zw" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0"
Oct 30 00:10:14.978841 containerd[1578]: 2025-10-30 00:10:14.878 [INFO][3912] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" HandleID="k8s-pod-network.ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0"
Oct 30 00:10:14.979360 containerd[1578]: 2025-10-30 00:10:14.878 [INFO][3912] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" HandleID="k8s-pod-network.ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", "pod":"whisker-687d4c5f4-hq5zw", "timestamp":"2025-10-30 00:10:14.878132529 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 30 00:10:14.979360 containerd[1578]: 2025-10-30 00:10:14.878 [INFO][3912] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 30 00:10:14.979360 containerd[1578]: 2025-10-30 00:10:14.878 [INFO][3912] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 30 00:10:14.979360 containerd[1578]: 2025-10-30 00:10:14.878 [INFO][3912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8'
Oct 30 00:10:14.979360 containerd[1578]: 2025-10-30 00:10:14.891 [INFO][3912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8"
Oct 30 00:10:14.979360 containerd[1578]: 2025-10-30 00:10:14.898 [INFO][3912] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8"
Oct 30 00:10:14.979360 containerd[1578]: 2025-10-30 00:10:14.904 [INFO][3912] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8"
Oct 30 00:10:14.979360 containerd[1578]: 2025-10-30 00:10:14.907 [INFO][3912] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8"
Oct 30 00:10:14.979823 containerd[1578]: 2025-10-30 00:10:14.910 [INFO][3912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8"
Oct 30 00:10:14.979823 containerd[1578]: 2025-10-30 00:10:14.910 [INFO][3912] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8"
Oct 30 00:10:14.979823 containerd[1578]: 2025-10-30 00:10:14.913 [INFO][3912] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25
Oct 30 00:10:14.979823 containerd[1578]: 2025-10-30 00:10:14.922 [INFO][3912] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8"
Oct 30 00:10:14.979823 containerd[1578]: 2025-10-30 00:10:14.931 [INFO][3912] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.1/26] block=192.168.42.0/26 handle="k8s-pod-network.ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8"
Oct 30 00:10:14.979823 containerd[1578]: 2025-10-30 00:10:14.931 [INFO][3912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.1/26] handle="k8s-pod-network.ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8"
Oct 30 00:10:14.979823 containerd[1578]: 2025-10-30 00:10:14.931 [INFO][3912] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Oct 30 00:10:14.979823 containerd[1578]: 2025-10-30 00:10:14.931 [INFO][3912] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.1/26] IPv6=[] ContainerID="ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" HandleID="k8s-pod-network.ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0"
Oct 30 00:10:14.980289 containerd[1578]: 2025-10-30 00:10:14.935 [INFO][3901] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" Namespace="calico-system" Pod="whisker-687d4c5f4-hq5zw" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0", GenerateName:"whisker-687d4c5f4-", Namespace:"calico-system", SelfLink:"", UID:"ecf4255f-62f0-4818-8c75-902857f1c600", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"687d4c5f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"", Pod:"whisker-687d4c5f4-hq5zw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicf35e3db376", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 30 00:10:14.980456 containerd[1578]: 2025-10-30 00:10:14.936 [INFO][3901] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.1/32] ContainerID="ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" Namespace="calico-system" Pod="whisker-687d4c5f4-hq5zw" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0"
Oct 30 00:10:14.980456 containerd[1578]: 2025-10-30 00:10:14.936 [INFO][3901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf35e3db376 ContainerID="ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" Namespace="calico-system" Pod="whisker-687d4c5f4-hq5zw" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0"
Oct 30 00:10:14.980456 containerd[1578]: 2025-10-30 00:10:14.951 [INFO][3901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" Namespace="calico-system" Pod="whisker-687d4c5f4-hq5zw" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0"
Oct 30 00:10:14.980679 containerd[1578]: 2025-10-30 00:10:14.952 [INFO][3901] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" Namespace="calico-system" Pod="whisker-687d4c5f4-hq5zw" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0", GenerateName:"whisker-687d4c5f4-", Namespace:"calico-system", SelfLink:"", UID:"ecf4255f-62f0-4818-8c75-902857f1c600", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"687d4c5f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25", Pod:"whisker-687d4c5f4-hq5zw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicf35e3db376", MAC:"86:28:8e:23:49:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:14.980799 containerd[1578]: 2025-10-30 00:10:14.975 [INFO][3901] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" Namespace="calico-system" Pod="whisker-687d4c5f4-hq5zw" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-whisker--687d4c5f4--hq5zw-eth0" Oct 30 00:10:15.018397 containerd[1578]: 
time="2025-10-30T00:10:15.018269472Z" level=info msg="connecting to shim ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25" address="unix:///run/containerd/s/fe355ed61f49677b4d73e51979a1e98269936fd5c35c1518652525898aa433f1" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:10:15.056805 containerd[1578]: time="2025-10-30T00:10:15.056545386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kskmh,Uid:2a97c946-b833-49fb-b0be-330885d32847,Namespace:calico-system,Attempt:0,}" Oct 30 00:10:15.057410 systemd[1]: Started cri-containerd-ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25.scope - libcontainer container ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25. Oct 30 00:10:15.062124 kubelet[2876]: I1030 00:10:15.061605 2876 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d88e52b-8692-40fc-8148-df9b93ac1570" path="/var/lib/kubelet/pods/8d88e52b-8692-40fc-8148-df9b93ac1570/volumes" Oct 30 00:10:15.293455 containerd[1578]: time="2025-10-30T00:10:15.293098667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-687d4c5f4-hq5zw,Uid:ecf4255f-62f0-4818-8c75-902857f1c600,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee475400b69b41a991cf5137b43af02adeb8c1db2301a8448eb024d9e1271d25\"" Oct 30 00:10:15.300969 containerd[1578]: time="2025-10-30T00:10:15.300915566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:10:15.367443 systemd-networkd[1445]: cali37e9f2c1f21: Link UP Oct 30 00:10:15.368420 systemd-networkd[1445]: cali37e9f2c1f21: Gained carrier Oct 30 00:10:15.400522 containerd[1578]: 2025-10-30 00:10:15.132 [INFO][3958] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:10:15.400522 containerd[1578]: 2025-10-30 00:10:15.160 [INFO][3958] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0 goldmane-666569f655- calico-system 2a97c946-b833-49fb-b0be-330885d32847 837 0 2025-10-30 00:09:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8 goldmane-666569f655-kskmh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali37e9f2c1f21 [] [] }} ContainerID="539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" Namespace="calico-system" Pod="goldmane-666569f655-kskmh" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-" Oct 30 00:10:15.400522 containerd[1578]: 2025-10-30 00:10:15.161 [INFO][3958] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" Namespace="calico-system" Pod="goldmane-666569f655-kskmh" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0" Oct 30 00:10:15.400522 containerd[1578]: 2025-10-30 00:10:15.271 [INFO][4003] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" HandleID="k8s-pod-network.539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0" Oct 30 00:10:15.400976 containerd[1578]: 2025-10-30 00:10:15.272 [INFO][4003] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" HandleID="k8s-pod-network.539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" 
Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032abd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", "pod":"goldmane-666569f655-kskmh", "timestamp":"2025-10-30 00:10:15.271333366 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:10:15.400976 containerd[1578]: 2025-10-30 00:10:15.272 [INFO][4003] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:10:15.400976 containerd[1578]: 2025-10-30 00:10:15.272 [INFO][4003] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:10:15.400976 containerd[1578]: 2025-10-30 00:10:15.272 [INFO][4003] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8' Oct 30 00:10:15.400976 containerd[1578]: 2025-10-30 00:10:15.296 [INFO][4003] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:15.400976 containerd[1578]: 2025-10-30 00:10:15.307 [INFO][4003] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:15.400976 containerd[1578]: 2025-10-30 00:10:15.315 [INFO][4003] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:15.400976 containerd[1578]: 2025-10-30 00:10:15.319 [INFO][4003] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 
host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:15.401995 containerd[1578]: 2025-10-30 00:10:15.324 [INFO][4003] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:15.401995 containerd[1578]: 2025-10-30 00:10:15.325 [INFO][4003] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:15.401995 containerd[1578]: 2025-10-30 00:10:15.329 [INFO][4003] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21 Oct 30 00:10:15.401995 containerd[1578]: 2025-10-30 00:10:15.337 [INFO][4003] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:15.401995 containerd[1578]: 2025-10-30 00:10:15.349 [INFO][4003] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.2/26] block=192.168.42.0/26 handle="k8s-pod-network.539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:15.401995 containerd[1578]: 2025-10-30 00:10:15.349 [INFO][4003] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.2/26] handle="k8s-pod-network.539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:15.401995 containerd[1578]: 2025-10-30 00:10:15.349 [INFO][4003] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:10:15.401995 containerd[1578]: 2025-10-30 00:10:15.349 [INFO][4003] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.2/26] IPv6=[] ContainerID="539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" HandleID="k8s-pod-network.539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0" Oct 30 00:10:15.402847 containerd[1578]: 2025-10-30 00:10:15.356 [INFO][3958] cni-plugin/k8s.go 418: Populated endpoint ContainerID="539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" Namespace="calico-system" Pod="goldmane-666569f655-kskmh" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2a97c946-b833-49fb-b0be-330885d32847", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"", Pod:"goldmane-666569f655-kskmh", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.42.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali37e9f2c1f21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:15.402992 containerd[1578]: 2025-10-30 00:10:15.357 [INFO][3958] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.2/32] ContainerID="539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" Namespace="calico-system" Pod="goldmane-666569f655-kskmh" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0" Oct 30 00:10:15.402992 containerd[1578]: 2025-10-30 00:10:15.358 [INFO][3958] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37e9f2c1f21 ContainerID="539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" Namespace="calico-system" Pod="goldmane-666569f655-kskmh" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0" Oct 30 00:10:15.402992 containerd[1578]: 2025-10-30 00:10:15.367 [INFO][3958] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" Namespace="calico-system" Pod="goldmane-666569f655-kskmh" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0" Oct 30 00:10:15.404266 containerd[1578]: 2025-10-30 00:10:15.367 [INFO][3958] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" Namespace="calico-system" Pod="goldmane-666569f655-kskmh" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2a97c946-b833-49fb-b0be-330885d32847", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21", Pod:"goldmane-666569f655-kskmh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali37e9f2c1f21", MAC:"2a:d3:87:c7:1d:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:15.404421 containerd[1578]: 2025-10-30 00:10:15.395 [INFO][3958] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" Namespace="calico-system" Pod="goldmane-666569f655-kskmh" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-goldmane--666569f655--kskmh-eth0" Oct 30 00:10:15.455810 
containerd[1578]: time="2025-10-30T00:10:15.455655065Z" level=info msg="connecting to shim 539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21" address="unix:///run/containerd/s/e9c79f63b2fb56219e12b89bc1d1dc285c60a695553d249a63126f77c74a57db" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:10:15.493942 containerd[1578]: time="2025-10-30T00:10:15.493517708Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:15.498508 containerd[1578]: time="2025-10-30T00:10:15.498443006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:10:15.498717 containerd[1578]: time="2025-10-30T00:10:15.498475102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:10:15.499638 kubelet[2876]: E1030 00:10:15.499575 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:10:15.500420 kubelet[2876]: E1030 00:10:15.500201 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:10:15.500783 kubelet[2876]: E1030 00:10:15.500705 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6e3ec70ed95d466a96fa71158ea120e4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-687d4c5f4-hq5zw_calico-system(ecf4255f-62f0-4818-8c75-902857f1c600): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:15.506927 containerd[1578]: time="2025-10-30T00:10:15.506511434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 
00:10:15.550616 systemd[1]: Started cri-containerd-539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21.scope - libcontainer container 539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21. Oct 30 00:10:15.680332 containerd[1578]: time="2025-10-30T00:10:15.680090719Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:15.681867 containerd[1578]: time="2025-10-30T00:10:15.681794733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:10:15.682250 containerd[1578]: time="2025-10-30T00:10:15.682077855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:10:15.682769 kubelet[2876]: E1030 00:10:15.682686 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:10:15.682895 kubelet[2876]: E1030 00:10:15.682788 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:10:15.683493 kubelet[2876]: E1030 00:10:15.683395 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-687d4c5f4-hq5zw_calico-system(ecf4255f-62f0-4818-8c75-902857f1c600): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:15.685286 kubelet[2876]: E1030 00:10:15.685207 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687d4c5f4-hq5zw" podUID="ecf4255f-62f0-4818-8c75-902857f1c600" Oct 30 00:10:15.737731 containerd[1578]: time="2025-10-30T00:10:15.737666746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kskmh,Uid:2a97c946-b833-49fb-b0be-330885d32847,Namespace:calico-system,Attempt:0,} returns sandbox id \"539d9343d7c6c9c1371cdc81ab933b63ab20d6a577ff836189c95c974384ad21\"" Oct 30 00:10:15.741980 containerd[1578]: time="2025-10-30T00:10:15.741234300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:10:16.056666 containerd[1578]: time="2025-10-30T00:10:16.056610679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kvh9,Uid:a484d50b-2eb4-492d-b284-def42903781b,Namespace:kube-system,Attempt:0,}" Oct 30 00:10:16.059183 containerd[1578]: time="2025-10-30T00:10:16.058106970Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-585ffdbd84-kh6p2,Uid:f10b6c03-ea69-40d6-8304-f2729f28ebe7,Namespace:calico-system,Attempt:0,}" Oct 30 00:10:16.093446 containerd[1578]: time="2025-10-30T00:10:16.092660181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:16.097919 containerd[1578]: time="2025-10-30T00:10:16.097854481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:10:16.100043 kubelet[2876]: E1030 00:10:16.098763 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:10:16.102219 kubelet[2876]: E1030 00:10:16.100750 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:10:16.102388 containerd[1578]: time="2025-10-30T00:10:16.102048507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:10:16.102930 kubelet[2876]: E1030 00:10:16.102596 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhzdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kskmh_calico-system(2a97c946-b833-49fb-b0be-330885d32847): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:16.104513 kubelet[2876]: E1030 00:10:16.104376 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kskmh" podUID="2a97c946-b833-49fb-b0be-330885d32847" Oct 30 00:10:16.394148 kubelet[2876]: E1030 00:10:16.388985 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kskmh" podUID="2a97c946-b833-49fb-b0be-330885d32847" Oct 30 00:10:16.407171 kubelet[2876]: E1030 00:10:16.406622 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687d4c5f4-hq5zw" podUID="ecf4255f-62f0-4818-8c75-902857f1c600" Oct 30 00:10:16.472548 systemd-networkd[1445]: calicf35e3db376: Gained IPv6LL Oct 30 00:10:16.630971 systemd-networkd[1445]: calib4a970da224: Link UP Oct 30 00:10:16.632857 systemd-networkd[1445]: calib4a970da224: Gained carrier Oct 30 00:10:16.690818 containerd[1578]: 2025-10-30 00:10:16.211 [INFO][4153] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:10:16.690818 containerd[1578]: 2025-10-30 00:10:16.250 [INFO][4153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0 calico-kube-controllers-585ffdbd84- calico-system f10b6c03-ea69-40d6-8304-f2729f28ebe7 836 0 2025-10-30 00:09:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:585ffdbd84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8 calico-kube-controllers-585ffdbd84-kh6p2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib4a970da224 [] [] }} ContainerID="06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" Namespace="calico-system" Pod="calico-kube-controllers-585ffdbd84-kh6p2" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-" Oct 30 00:10:16.690818 containerd[1578]: 2025-10-30 00:10:16.251 [INFO][4153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" Namespace="calico-system" Pod="calico-kube-controllers-585ffdbd84-kh6p2" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0" Oct 30 00:10:16.691202 containerd[1578]: 2025-10-30 00:10:16.332 [INFO][4185] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" HandleID="k8s-pod-network.06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0" Oct 30 00:10:16.691202 containerd[1578]: 2025-10-30 00:10:16.332 [INFO][4185] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" HandleID="k8s-pod-network.06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", "pod":"calico-kube-controllers-585ffdbd84-kh6p2", "timestamp":"2025-10-30 00:10:16.332069747 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:10:16.691202 containerd[1578]: 2025-10-30 00:10:16.333 [INFO][4185] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:10:16.691202 containerd[1578]: 2025-10-30 00:10:16.333 [INFO][4185] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:10:16.691202 containerd[1578]: 2025-10-30 00:10:16.333 [INFO][4185] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8' Oct 30 00:10:16.691202 containerd[1578]: 2025-10-30 00:10:16.359 [INFO][4185] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.691518 containerd[1578]: 2025-10-30 00:10:16.406 [INFO][4185] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.691518 containerd[1578]: 2025-10-30 00:10:16.439 [INFO][4185] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.691518 containerd[1578]: 2025-10-30 00:10:16.458 [INFO][4185] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.691518 containerd[1578]: 2025-10-30 00:10:16.490 [INFO][4185] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.691518 containerd[1578]: 2025-10-30 00:10:16.493 [INFO][4185] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.691518 containerd[1578]: 2025-10-30 00:10:16.514 [INFO][4185] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250 Oct 30 00:10:16.691518 containerd[1578]: 2025-10-30 00:10:16.569 [INFO][4185] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 
handle="k8s-pod-network.06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.691518 containerd[1578]: 2025-10-30 00:10:16.618 [INFO][4185] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.3/26] block=192.168.42.0/26 handle="k8s-pod-network.06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.691518 containerd[1578]: 2025-10-30 00:10:16.618 [INFO][4185] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.3/26] handle="k8s-pod-network.06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.691518 containerd[1578]: 2025-10-30 00:10:16.619 [INFO][4185] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:10:16.691973 containerd[1578]: 2025-10-30 00:10:16.620 [INFO][4185] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.3/26] IPv6=[] ContainerID="06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" HandleID="k8s-pod-network.06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0" Oct 30 00:10:16.696102 containerd[1578]: 2025-10-30 00:10:16.623 [INFO][4153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" Namespace="calico-system" Pod="calico-kube-controllers-585ffdbd84-kh6p2" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0", GenerateName:"calico-kube-controllers-585ffdbd84-", Namespace:"calico-system", SelfLink:"", UID:"f10b6c03-ea69-40d6-8304-f2729f28ebe7", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"585ffdbd84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"", Pod:"calico-kube-controllers-585ffdbd84-kh6p2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4a970da224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:16.696102 containerd[1578]: 2025-10-30 00:10:16.623 [INFO][4153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.3/32] ContainerID="06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" Namespace="calico-system" Pod="calico-kube-controllers-585ffdbd84-kh6p2" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0" Oct 30 00:10:16.696102 containerd[1578]: 2025-10-30 00:10:16.624 
[INFO][4153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4a970da224 ContainerID="06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" Namespace="calico-system" Pod="calico-kube-controllers-585ffdbd84-kh6p2" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0" Oct 30 00:10:16.696102 containerd[1578]: 2025-10-30 00:10:16.635 [INFO][4153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" Namespace="calico-system" Pod="calico-kube-controllers-585ffdbd84-kh6p2" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0" Oct 30 00:10:16.696102 containerd[1578]: 2025-10-30 00:10:16.636 [INFO][4153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" Namespace="calico-system" Pod="calico-kube-controllers-585ffdbd84-kh6p2" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0", GenerateName:"calico-kube-controllers-585ffdbd84-", Namespace:"calico-system", SelfLink:"", UID:"f10b6c03-ea69-40d6-8304-f2729f28ebe7", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"585ffdbd84", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250", Pod:"calico-kube-controllers-585ffdbd84-kh6p2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4a970da224", MAC:"de:82:65:df:34:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:16.696587 containerd[1578]: 2025-10-30 00:10:16.680 [INFO][4153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" Namespace="calico-system" Pod="calico-kube-controllers-585ffdbd84-kh6p2" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--kube--controllers--585ffdbd84--kh6p2-eth0" Oct 30 00:10:16.755331 containerd[1578]: time="2025-10-30T00:10:16.755275109Z" level=info msg="connecting to shim 06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250" address="unix:///run/containerd/s/23086a6ccd6f32ea63652c9899c538ca818f4e5b41fb7c162d7b199d7c0ce2e4" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:10:16.820237 systemd-networkd[1445]: calif91c0e9bb68: Link UP Oct 30 00:10:16.820664 systemd-networkd[1445]: calif91c0e9bb68: Gained carrier Oct 30 00:10:16.843520 systemd[1]: Started cri-containerd-06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250.scope - 
libcontainer container 06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250. Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.203 [INFO][4152] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.238 [INFO][4152] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0 coredns-674b8bbfcf- kube-system a484d50b-2eb4-492d-b284-def42903781b 835 0 2025-10-30 00:09:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8 coredns-674b8bbfcf-8kvh9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif91c0e9bb68 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kvh9" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-" Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.238 [INFO][4152] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kvh9" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0" Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.387 [INFO][4184] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" HandleID="k8s-pod-network.c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" 
Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0" Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.389 [INFO][4184] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" HandleID="k8s-pod-network.c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000372e30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", "pod":"coredns-674b8bbfcf-8kvh9", "timestamp":"2025-10-30 00:10:16.387828234 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.389 [INFO][4184] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.619 [INFO][4184] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.619 [INFO][4184] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8' Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.675 [INFO][4184] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.695 [INFO][4184] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.719 [INFO][4184] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.723 [INFO][4184] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.736 [INFO][4184] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.737 [INFO][4184] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.742 [INFO][4184] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416 Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.759 [INFO][4184] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 
handle="k8s-pod-network.c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.801 [INFO][4184] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.4/26] block=192.168.42.0/26 handle="k8s-pod-network.c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.870733 containerd[1578]: 2025-10-30 00:10:16.802 [INFO][4184] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.4/26] handle="k8s-pod-network.c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:16.875532 containerd[1578]: 2025-10-30 00:10:16.802 [INFO][4184] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:10:16.875532 containerd[1578]: 2025-10-30 00:10:16.802 [INFO][4184] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.4/26] IPv6=[] ContainerID="c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" HandleID="k8s-pod-network.c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0" Oct 30 00:10:16.875532 containerd[1578]: 2025-10-30 00:10:16.811 [INFO][4152] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kvh9" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0", GenerateName:"coredns-674b8bbfcf-", 
Namespace:"kube-system", SelfLink:"", UID:"a484d50b-2eb4-492d-b284-def42903781b", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"", Pod:"coredns-674b8bbfcf-8kvh9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif91c0e9bb68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:16.875532 containerd[1578]: 2025-10-30 00:10:16.811 [INFO][4152] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.4/32] ContainerID="c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kvh9" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0" Oct 30 00:10:16.875532 
containerd[1578]: 2025-10-30 00:10:16.811 [INFO][4152] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif91c0e9bb68 ContainerID="c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kvh9" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0" Oct 30 00:10:16.875532 containerd[1578]: 2025-10-30 00:10:16.822 [INFO][4152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kvh9" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0" Oct 30 00:10:16.876688 containerd[1578]: 2025-10-30 00:10:16.829 [INFO][4152] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kvh9" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a484d50b-2eb4-492d-b284-def42903781b", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416", Pod:"coredns-674b8bbfcf-8kvh9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif91c0e9bb68", MAC:"96:94:39:ad:64:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:16.876688 containerd[1578]: 2025-10-30 00:10:16.864 [INFO][4152] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kvh9" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--8kvh9-eth0" Oct 30 00:10:16.934882 containerd[1578]: time="2025-10-30T00:10:16.934809199Z" level=info msg="connecting to shim c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416" address="unix:///run/containerd/s/bd223c96b2a676ab77e868745f518301f9a9dbe576811d00b4ddd3363d9dd533" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:10:17.011950 systemd[1]: Started cri-containerd-c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416.scope - 
libcontainer container c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416. Oct 30 00:10:17.058741 containerd[1578]: time="2025-10-30T00:10:17.058684633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f747677c9-bj47h,Uid:c462b67e-383a-4a79-a697-1a4848277370,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:10:17.061158 containerd[1578]: time="2025-10-30T00:10:17.059986868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7r8v6,Uid:bcc8b342-9bf9-42ad-8c6d-8b01c298be9a,Namespace:kube-system,Attempt:0,}" Oct 30 00:10:17.240763 systemd-networkd[1445]: cali37e9f2c1f21: Gained IPv6LL Oct 30 00:10:17.258631 containerd[1578]: time="2025-10-30T00:10:17.258553011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kvh9,Uid:a484d50b-2eb4-492d-b284-def42903781b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416\"" Oct 30 00:10:17.268828 containerd[1578]: time="2025-10-30T00:10:17.268297430Z" level=info msg="CreateContainer within sandbox \"c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:10:17.288463 containerd[1578]: time="2025-10-30T00:10:17.288411652Z" level=info msg="Container fdb09e890417cc06fb8815ac7e260d534c11c389cac7890e7f30df023b3f7193: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:10:17.299248 containerd[1578]: time="2025-10-30T00:10:17.299196139Z" level=info msg="CreateContainer within sandbox \"c610ca244ddf061f60640524018a0d7bf05b9faa61ce77c40f15cdd29a526416\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdb09e890417cc06fb8815ac7e260d534c11c389cac7890e7f30df023b3f7193\"" Oct 30 00:10:17.302528 containerd[1578]: time="2025-10-30T00:10:17.302468109Z" level=info msg="StartContainer for \"fdb09e890417cc06fb8815ac7e260d534c11c389cac7890e7f30df023b3f7193\"" Oct 30 00:10:17.317180 
containerd[1578]: time="2025-10-30T00:10:17.316730391Z" level=info msg="connecting to shim fdb09e890417cc06fb8815ac7e260d534c11c389cac7890e7f30df023b3f7193" address="unix:///run/containerd/s/bd223c96b2a676ab77e868745f518301f9a9dbe576811d00b4ddd3363d9dd533" protocol=ttrpc version=3 Oct 30 00:10:17.370597 containerd[1578]: time="2025-10-30T00:10:17.370545342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-585ffdbd84-kh6p2,Uid:f10b6c03-ea69-40d6-8304-f2729f28ebe7,Namespace:calico-system,Attempt:0,} returns sandbox id \"06f3f71cec440f44ab81bb63898b0e329966d9a368bdc217d716f7626c5de250\"" Oct 30 00:10:17.387496 containerd[1578]: time="2025-10-30T00:10:17.385517045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:10:17.396305 systemd[1]: Started cri-containerd-fdb09e890417cc06fb8815ac7e260d534c11c389cac7890e7f30df023b3f7193.scope - libcontainer container fdb09e890417cc06fb8815ac7e260d534c11c389cac7890e7f30df023b3f7193. 
Oct 30 00:10:17.414631 kubelet[2876]: E1030 00:10:17.414470 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kskmh" podUID="2a97c946-b833-49fb-b0be-330885d32847" Oct 30 00:10:17.555423 containerd[1578]: time="2025-10-30T00:10:17.553593045Z" level=info msg="StartContainer for \"fdb09e890417cc06fb8815ac7e260d534c11c389cac7890e7f30df023b3f7193\" returns successfully" Oct 30 00:10:17.595194 systemd-networkd[1445]: cali61f592e5539: Link UP Oct 30 00:10:17.600598 containerd[1578]: time="2025-10-30T00:10:17.599667082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:17.599710 systemd-networkd[1445]: cali61f592e5539: Gained carrier Oct 30 00:10:17.606977 containerd[1578]: time="2025-10-30T00:10:17.606865710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:10:17.607463 containerd[1578]: time="2025-10-30T00:10:17.606881336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:10:17.616064 kubelet[2876]: E1030 00:10:17.615279 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:10:17.616064 kubelet[2876]: E1030 00:10:17.615843 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:10:17.620669 kubelet[2876]: E1030 00:10:17.620319 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8vgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-585ffdbd84-kh6p2_calico-system(f10b6c03-ea69-40d6-8304-f2729f28ebe7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:17.621887 kubelet[2876]: E1030 00:10:17.621703 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2" podUID="f10b6c03-ea69-40d6-8304-f2729f28ebe7" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.245 [INFO][4300] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0 calico-apiserver-f747677c9- calico-apiserver c462b67e-383a-4a79-a697-1a4848277370 838 0 2025-10-30 00:09:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f747677c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8 calico-apiserver-f747677c9-bj47h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali61f592e5539 [] [] }} ContainerID="9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-bj47h" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.249 [INFO][4300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-bj47h" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.427 [INFO][4342] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" HandleID="k8s-pod-network.9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.428 [INFO][4342] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" HandleID="k8s-pod-network.9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002acdb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", "pod":"calico-apiserver-f747677c9-bj47h", "timestamp":"2025-10-30 00:10:17.427348042 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.429 [INFO][4342] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.430 [INFO][4342] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.430 [INFO][4342] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8' Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.466 [INFO][4342] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.482 [INFO][4342] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.510 [INFO][4342] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.518 [INFO][4342] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.525 [INFO][4342] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.525 [INFO][4342] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.531 [INFO][4342] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21 Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.555 [INFO][4342] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 
handle="k8s-pod-network.9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.572 [INFO][4342] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.5/26] block=192.168.42.0/26 handle="k8s-pod-network.9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.650318 containerd[1578]: 2025-10-30 00:10:17.572 [INFO][4342] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.5/26] handle="k8s-pod-network.9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.651489 containerd[1578]: 2025-10-30 00:10:17.572 [INFO][4342] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:10:17.651489 containerd[1578]: 2025-10-30 00:10:17.573 [INFO][4342] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.5/26] IPv6=[] ContainerID="9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" HandleID="k8s-pod-network.9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0" Oct 30 00:10:17.651489 containerd[1578]: 2025-10-30 00:10:17.581 [INFO][4300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-bj47h" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0", GenerateName:"calico-apiserver-f747677c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c462b67e-383a-4a79-a697-1a4848277370", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f747677c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"", Pod:"calico-apiserver-f747677c9-bj47h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61f592e5539", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:17.651489 containerd[1578]: 2025-10-30 00:10:17.583 [INFO][4300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.5/32] ContainerID="9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-bj47h" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0" Oct 30 00:10:17.651489 containerd[1578]: 2025-10-30 00:10:17.583 [INFO][4300] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to cali61f592e5539 ContainerID="9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-bj47h" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0" Oct 30 00:10:17.651489 containerd[1578]: 2025-10-30 00:10:17.601 [INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-bj47h" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0" Oct 30 00:10:17.653051 containerd[1578]: 2025-10-30 00:10:17.605 [INFO][4300] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-bj47h" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0", GenerateName:"calico-apiserver-f747677c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c462b67e-383a-4a79-a697-1a4848277370", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f747677c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21", Pod:"calico-apiserver-f747677c9-bj47h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61f592e5539", MAC:"f6:24:ea:52:c8:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:17.653051 containerd[1578]: 2025-10-30 00:10:17.645 [INFO][4300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-bj47h" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--bj47h-eth0" Oct 30 00:10:17.702456 containerd[1578]: time="2025-10-30T00:10:17.702172908Z" level=info msg="connecting to shim 9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21" address="unix:///run/containerd/s/4d9ef9a5c603034b4a9b3128c6c03afbc19d694a1d84cdbce3824ef5102f6926" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:10:17.778288 systemd[1]: Started cri-containerd-9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21.scope - libcontainer container 9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21. 
Oct 30 00:10:17.792230 systemd-networkd[1445]: cali85c78034bdc: Link UP Oct 30 00:10:17.796065 systemd-networkd[1445]: cali85c78034bdc: Gained carrier Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.338 [INFO][4302] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0 coredns-674b8bbfcf- kube-system bcc8b342-9bf9-42ad-8c6d-8b01c298be9a 834 0 2025-10-30 00:09:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8 coredns-674b8bbfcf-7r8v6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali85c78034bdc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" Namespace="kube-system" Pod="coredns-674b8bbfcf-7r8v6" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.341 [INFO][4302] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" Namespace="kube-system" Pod="coredns-674b8bbfcf-7r8v6" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.491 [INFO][4358] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" HandleID="k8s-pod-network.4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" 
Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.492 [INFO][4358] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" HandleID="k8s-pod-network.4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00020b1b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", "pod":"coredns-674b8bbfcf-7r8v6", "timestamp":"2025-10-30 00:10:17.491915201 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.492 [INFO][4358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.572 [INFO][4358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.573 [INFO][4358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8' Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.608 [INFO][4358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.636 [INFO][4358] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.661 [INFO][4358] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.665 [INFO][4358] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.674 [INFO][4358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.674 [INFO][4358] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.680 [INFO][4358] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681 Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.691 [INFO][4358] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 
handle="k8s-pod-network.4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.765 [INFO][4358] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.6/26] block=192.168.42.0/26 handle="k8s-pod-network.4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.765 [INFO][4358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.6/26] handle="k8s-pod-network.4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:17.825461 containerd[1578]: 2025-10-30 00:10:17.767 [INFO][4358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:10:17.826834 containerd[1578]: 2025-10-30 00:10:17.767 [INFO][4358] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.6/26] IPv6=[] ContainerID="4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" HandleID="k8s-pod-network.4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0" Oct 30 00:10:17.826834 containerd[1578]: 2025-10-30 00:10:17.782 [INFO][4302] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" Namespace="kube-system" Pod="coredns-674b8bbfcf-7r8v6" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0", GenerateName:"coredns-674b8bbfcf-", 
Namespace:"kube-system", SelfLink:"", UID:"bcc8b342-9bf9-42ad-8c6d-8b01c298be9a", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"", Pod:"coredns-674b8bbfcf-7r8v6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85c78034bdc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:17.826834 containerd[1578]: 2025-10-30 00:10:17.784 [INFO][4302] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.6/32] ContainerID="4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" Namespace="kube-system" Pod="coredns-674b8bbfcf-7r8v6" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0" Oct 30 00:10:17.826834 
containerd[1578]: 2025-10-30 00:10:17.784 [INFO][4302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85c78034bdc ContainerID="4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" Namespace="kube-system" Pod="coredns-674b8bbfcf-7r8v6" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0" Oct 30 00:10:17.826834 containerd[1578]: 2025-10-30 00:10:17.791 [INFO][4302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" Namespace="kube-system" Pod="coredns-674b8bbfcf-7r8v6" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0" Oct 30 00:10:17.834034 containerd[1578]: 2025-10-30 00:10:17.797 [INFO][4302] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" Namespace="kube-system" Pod="coredns-674b8bbfcf-7r8v6" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bcc8b342-9bf9-42ad-8c6d-8b01c298be9a", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681", Pod:"coredns-674b8bbfcf-7r8v6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85c78034bdc", MAC:"d6:a3:f8:a5:a1:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:17.834034 containerd[1578]: 2025-10-30 00:10:17.815 [INFO][4302] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" Namespace="kube-system" Pod="coredns-674b8bbfcf-7r8v6" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-coredns--674b8bbfcf--7r8v6-eth0" Oct 30 00:10:17.890064 containerd[1578]: time="2025-10-30T00:10:17.888279582Z" level=info msg="connecting to shim 4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681" address="unix:///run/containerd/s/0f5a5f0f037f3425004e594ce940714f5581c978cec33054054a9f7718119a24" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:10:17.970294 systemd[1]: Started cri-containerd-4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681.scope - 
libcontainer container 4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681. Oct 30 00:10:18.083793 containerd[1578]: time="2025-10-30T00:10:18.083622998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7r8v6,Uid:bcc8b342-9bf9-42ad-8c6d-8b01c298be9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681\"" Oct 30 00:10:18.092962 containerd[1578]: time="2025-10-30T00:10:18.092864525Z" level=info msg="CreateContainer within sandbox \"4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:10:18.119066 containerd[1578]: time="2025-10-30T00:10:18.116631619Z" level=info msg="Container 203c110909278b1ee607c689f84c08c6c42b5bda30a2006a8c3282840181ac6c: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:10:18.130304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678672397.mount: Deactivated successfully. 
Oct 30 00:10:18.144062 containerd[1578]: time="2025-10-30T00:10:18.143963681Z" level=info msg="CreateContainer within sandbox \"4d00c7b21dd67c218a3a5549a913ef5fa2ec663edb01a3d9ae62eefdacf50681\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"203c110909278b1ee607c689f84c08c6c42b5bda30a2006a8c3282840181ac6c\"" Oct 30 00:10:18.146399 containerd[1578]: time="2025-10-30T00:10:18.146350129Z" level=info msg="StartContainer for \"203c110909278b1ee607c689f84c08c6c42b5bda30a2006a8c3282840181ac6c\"" Oct 30 00:10:18.151661 containerd[1578]: time="2025-10-30T00:10:18.151496317Z" level=info msg="connecting to shim 203c110909278b1ee607c689f84c08c6c42b5bda30a2006a8c3282840181ac6c" address="unix:///run/containerd/s/0f5a5f0f037f3425004e594ce940714f5581c978cec33054054a9f7718119a24" protocol=ttrpc version=3 Oct 30 00:10:18.192074 systemd[1]: Started cri-containerd-203c110909278b1ee607c689f84c08c6c42b5bda30a2006a8c3282840181ac6c.scope - libcontainer container 203c110909278b1ee607c689f84c08c6c42b5bda30a2006a8c3282840181ac6c. 
Oct 30 00:10:18.261753 containerd[1578]: time="2025-10-30T00:10:18.261631099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f747677c9-bj47h,Uid:c462b67e-383a-4a79-a697-1a4848277370,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9da9c9d94991e5cf9237655e47bcbdf8970979b4d22e71bd35ef1b366b4aff21\"" Oct 30 00:10:18.274116 containerd[1578]: time="2025-10-30T00:10:18.273935177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:10:18.322910 containerd[1578]: time="2025-10-30T00:10:18.322847747Z" level=info msg="StartContainer for \"203c110909278b1ee607c689f84c08c6c42b5bda30a2006a8c3282840181ac6c\" returns successfully" Oct 30 00:10:18.441111 kubelet[2876]: E1030 00:10:18.440625 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2" podUID="f10b6c03-ea69-40d6-8304-f2729f28ebe7" Oct 30 00:10:18.455772 containerd[1578]: time="2025-10-30T00:10:18.455533494Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:18.457223 containerd[1578]: time="2025-10-30T00:10:18.457104656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:10:18.457505 containerd[1578]: 
time="2025-10-30T00:10:18.457126529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:10:18.457923 kubelet[2876]: E1030 00:10:18.457877 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:10:18.458118 kubelet[2876]: E1030 00:10:18.458092 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:10:18.458606 kubelet[2876]: E1030 00:10:18.458523 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gddmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f747677c9-bj47h_calico-apiserver(c462b67e-383a-4a79-a697-1a4848277370): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:18.460087 kubelet[2876]: E1030 00:10:18.460039 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-bj47h" podUID="c462b67e-383a-4a79-a697-1a4848277370" Oct 30 00:10:18.484092 kubelet[2876]: I1030 00:10:18.483947 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7r8v6" podStartSLOduration=44.483923761 podStartE2EDuration="44.483923761s" podCreationTimestamp="2025-10-30 00:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:10:18.483423002 +0000 UTC m=+49.824555316" watchObservedRunningTime="2025-10-30 00:10:18.483923761 +0000 UTC m=+49.825056076" Oct 30 00:10:18.586060 kubelet[2876]: I1030 00:10:18.585537 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8kvh9" podStartSLOduration=44.58551203 podStartE2EDuration="44.58551203s" podCreationTimestamp="2025-10-30 00:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:10:18.585086586 +0000 UTC m=+49.926218899" watchObservedRunningTime="2025-10-30 00:10:18.58551203 +0000 UTC m=+49.926644342" Oct 30 00:10:18.648504 systemd-networkd[1445]: calib4a970da224: Gained IPv6LL Oct 30 00:10:18.716468 
systemd-networkd[1445]: vxlan.calico: Link UP Oct 30 00:10:18.716482 systemd-networkd[1445]: vxlan.calico: Gained carrier Oct 30 00:10:18.776512 systemd-networkd[1445]: calif91c0e9bb68: Gained IPv6LL Oct 30 00:10:19.059108 containerd[1578]: time="2025-10-30T00:10:19.058917336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vsw6q,Uid:c0987c36-521e-441e-a4df-01b4de4064f7,Namespace:calico-system,Attempt:0,}" Oct 30 00:10:19.060625 containerd[1578]: time="2025-10-30T00:10:19.060566646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f747677c9-846mn,Uid:143027e7-a13c-4c0d-bf53-591d4038e751,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:10:19.288597 systemd-networkd[1445]: cali61f592e5539: Gained IPv6LL Oct 30 00:10:19.398716 systemd-networkd[1445]: cali993d3541800: Link UP Oct 30 00:10:19.400788 systemd-networkd[1445]: cali993d3541800: Gained carrier Oct 30 00:10:19.451304 kubelet[2876]: E1030 00:10:19.451171 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-bj47h" podUID="c462b67e-383a-4a79-a697-1a4848277370" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.221 [INFO][4562] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0 calico-apiserver-f747677c9- calico-apiserver 143027e7-a13c-4c0d-bf53-591d4038e751 830 0 2025-10-30 00:09:44 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f747677c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8 calico-apiserver-f747677c9-846mn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali993d3541800 [] [] }} ContainerID="a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-846mn" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.222 [INFO][4562] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-846mn" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.305 [INFO][4589] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" HandleID="k8s-pod-network.a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.306 [INFO][4589] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" HandleID="k8s-pod-network.a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", "pod":"calico-apiserver-f747677c9-846mn", "timestamp":"2025-10-30 00:10:19.305724444 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.307 [INFO][4589] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.307 [INFO][4589] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.307 [INFO][4589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8' Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.320 [INFO][4589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.328 [INFO][4589] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.339 [INFO][4589] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.343 [INFO][4589] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.349 [INFO][4589] 
ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.349 [INFO][4589] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.359 [INFO][4589] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65 Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.367 [INFO][4589] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.378 [INFO][4589] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.7/26] block=192.168.42.0/26 handle="k8s-pod-network.a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.461644 containerd[1578]: 2025-10-30 00:10:19.379 [INFO][4589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.7/26] handle="k8s-pod-network.a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.463741 containerd[1578]: 2025-10-30 00:10:19.379 [INFO][4589] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:10:19.463741 containerd[1578]: 2025-10-30 00:10:19.380 [INFO][4589] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.7/26] IPv6=[] ContainerID="a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" HandleID="k8s-pod-network.a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0" Oct 30 00:10:19.463741 containerd[1578]: 2025-10-30 00:10:19.386 [INFO][4562] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-846mn" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0", GenerateName:"calico-apiserver-f747677c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"143027e7-a13c-4c0d-bf53-591d4038e751", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f747677c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"", 
Pod:"calico-apiserver-f747677c9-846mn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali993d3541800", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:19.463741 containerd[1578]: 2025-10-30 00:10:19.386 [INFO][4562] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.7/32] ContainerID="a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-846mn" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0" Oct 30 00:10:19.463741 containerd[1578]: 2025-10-30 00:10:19.386 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali993d3541800 ContainerID="a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-846mn" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0" Oct 30 00:10:19.463741 containerd[1578]: 2025-10-30 00:10:19.404 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-846mn" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0" Oct 30 00:10:19.465217 containerd[1578]: 2025-10-30 00:10:19.406 [INFO][4562] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" Namespace="calico-apiserver" 
Pod="calico-apiserver-f747677c9-846mn" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0", GenerateName:"calico-apiserver-f747677c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"143027e7-a13c-4c0d-bf53-591d4038e751", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f747677c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65", Pod:"calico-apiserver-f747677c9-846mn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali993d3541800", MAC:"1a:49:fc:95:a9:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:19.465217 containerd[1578]: 2025-10-30 00:10:19.451 [INFO][4562] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" Namespace="calico-apiserver" Pod="calico-apiserver-f747677c9-846mn" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-calico--apiserver--f747677c9--846mn-eth0" Oct 30 00:10:19.594382 containerd[1578]: time="2025-10-30T00:10:19.594320079Z" level=info msg="connecting to shim a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65" address="unix:///run/containerd/s/64e51b03ba277c63a75f79c00ce010270cfc717ce18336cd9865ae818eeacfdb" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:10:19.653398 systemd-networkd[1445]: cali46527681659: Link UP Oct 30 00:10:19.657276 systemd-networkd[1445]: cali46527681659: Gained carrier Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.217 [INFO][4567] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0 csi-node-driver- calico-system c0987c36-521e-441e-a4df-01b4de4064f7 725 0 2025-10-30 00:09:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8 csi-node-driver-vsw6q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali46527681659 [] [] }} ContainerID="688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" Namespace="calico-system" Pod="csi-node-driver-vsw6q" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.217 [INFO][4567] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" Namespace="calico-system" Pod="csi-node-driver-vsw6q" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.309 [INFO][4584] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" HandleID="k8s-pod-network.688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.312 [INFO][4584] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" HandleID="k8s-pod-network.688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125c00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", "pod":"csi-node-driver-vsw6q", "timestamp":"2025-10-30 00:10:19.309381291 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.312 [INFO][4584] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.379 [INFO][4584] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.380 [INFO][4584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8' Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.428 [INFO][4584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.447 [INFO][4584] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.479 [INFO][4584] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.487 [INFO][4584] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.504 [INFO][4584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.506 [INFO][4584] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.529 [INFO][4584] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.557 [INFO][4584] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 
handle="k8s-pod-network.688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.605 [INFO][4584] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.8/26] block=192.168.42.0/26 handle="k8s-pod-network.688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.605 [INFO][4584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.8/26] handle="k8s-pod-network.688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" host="ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8" Oct 30 00:10:19.710243 containerd[1578]: 2025-10-30 00:10:19.605 [INFO][4584] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:10:19.712744 containerd[1578]: 2025-10-30 00:10:19.605 [INFO][4584] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.8/26] IPv6=[] ContainerID="688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" HandleID="k8s-pod-network.688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" Workload="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0" Oct 30 00:10:19.712744 containerd[1578]: 2025-10-30 00:10:19.626 [INFO][4567] cni-plugin/k8s.go 418: Populated endpoint ContainerID="688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" Namespace="calico-system" Pod="csi-node-driver-vsw6q" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"c0987c36-521e-441e-a4df-01b4de4064f7", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"", Pod:"csi-node-driver-vsw6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali46527681659", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:19.712744 containerd[1578]: 2025-10-30 00:10:19.628 [INFO][4567] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.8/32] ContainerID="688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" Namespace="calico-system" Pod="csi-node-driver-vsw6q" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0" Oct 30 00:10:19.712744 containerd[1578]: 2025-10-30 00:10:19.631 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46527681659 ContainerID="688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" Namespace="calico-system" Pod="csi-node-driver-vsw6q" 
WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0" Oct 30 00:10:19.712744 containerd[1578]: 2025-10-30 00:10:19.657 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" Namespace="calico-system" Pod="csi-node-driver-vsw6q" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0" Oct 30 00:10:19.715300 containerd[1578]: 2025-10-30 00:10:19.662 [INFO][4567] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" Namespace="calico-system" Pod="csi-node-driver-vsw6q" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0987c36-521e-441e-a4df-01b4de4064f7", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-1-0-nightly-20251029-2100-1a037aaa6ecc488138a8", ContainerID:"688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a", Pod:"csi-node-driver-vsw6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali46527681659", MAC:"32:d4:6d:14:d5:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:10:19.715300 containerd[1578]: 2025-10-30 00:10:19.698 [INFO][4567] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" Namespace="calico-system" Pod="csi-node-driver-vsw6q" WorkloadEndpoint="ci--4459--1--0--nightly--20251029--2100--1a037aaa6ecc488138a8-k8s-csi--node--driver--vsw6q-eth0" Oct 30 00:10:19.735809 systemd[1]: Started cri-containerd-a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65.scope - libcontainer container a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65. Oct 30 00:10:19.790330 containerd[1578]: time="2025-10-30T00:10:19.790272341Z" level=info msg="connecting to shim 688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a" address="unix:///run/containerd/s/b1086a7868a11dc2a5cd6e93b3df75fc27b98be0cf56bd479c953c0b8b2d5057" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:10:19.800243 systemd-networkd[1445]: cali85c78034bdc: Gained IPv6LL Oct 30 00:10:19.858334 systemd[1]: Started cri-containerd-688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a.scope - libcontainer container 688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a. 
Oct 30 00:10:19.944447 containerd[1578]: time="2025-10-30T00:10:19.944300382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vsw6q,Uid:c0987c36-521e-441e-a4df-01b4de4064f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"688a7f0370a6aacafc2f770ef0c3ce70aee1c5927ce7b0f5d59bf39141a1c44a\"" Oct 30 00:10:19.953755 containerd[1578]: time="2025-10-30T00:10:19.953330894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:10:20.014348 containerd[1578]: time="2025-10-30T00:10:20.014263880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f747677c9-846mn,Uid:143027e7-a13c-4c0d-bf53-591d4038e751,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a0e9e65911cbbe184dbfa1df454ec4b7ab62bf4d6eef6df84bed8307c0b6de65\"" Oct 30 00:10:20.184005 containerd[1578]: time="2025-10-30T00:10:20.183951495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:20.186146 containerd[1578]: time="2025-10-30T00:10:20.185947329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:10:20.186375 containerd[1578]: time="2025-10-30T00:10:20.185951513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:10:20.187167 kubelet[2876]: E1030 00:10:20.186983 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:10:20.187167 kubelet[2876]: E1030 00:10:20.187136 2876 kuberuntime_image.go:42] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:10:20.188368 kubelet[2876]: E1030 00:10:20.188296 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8f55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:
nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vsw6q_calico-system(c0987c36-521e-441e-a4df-01b4de4064f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:20.189787 containerd[1578]: time="2025-10-30T00:10:20.189261500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:10:20.248390 systemd-networkd[1445]: vxlan.calico: Gained IPv6LL Oct 30 00:10:20.349360 kubelet[2876]: I1030 00:10:20.349283 2876 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 00:10:20.357405 containerd[1578]: time="2025-10-30T00:10:20.357318100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:20.359246 containerd[1578]: time="2025-10-30T00:10:20.359071034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:10:20.359246 containerd[1578]: time="2025-10-30T00:10:20.359194305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:10:20.359944 kubelet[2876]: E1030 00:10:20.359894 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:10:20.360373 kubelet[2876]: E1030 00:10:20.360175 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:10:20.360582 kubelet[2876]: E1030 00:10:20.360489 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpn7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f747677c9-846mn_calico-apiserver(143027e7-a13c-4c0d-bf53-591d4038e751): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:20.361778 containerd[1578]: time="2025-10-30T00:10:20.361743313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:10:20.362437 kubelet[2876]: E1030 00:10:20.362366 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-846mn" podUID="143027e7-a13c-4c0d-bf53-591d4038e751" Oct 30 00:10:20.448879 kubelet[2876]: E1030 
00:10:20.448812 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-846mn" podUID="143027e7-a13c-4c0d-bf53-591d4038e751" Oct 30 00:10:20.524801 containerd[1578]: time="2025-10-30T00:10:20.524505962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38a0adea01dfb6be18e66eb0d3d7fbbbd7ad64e31bf5f97a6b13913737d44dc1\" id:\"8ebb4a3d591eb524b4109b459d8c7280eb62650b078db47af074a71e0296b5c0\" pid:4761 exited_at:{seconds:1761783020 nanos:523842399}" Oct 30 00:10:20.526166 containerd[1578]: time="2025-10-30T00:10:20.525480217Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:20.527630 containerd[1578]: time="2025-10-30T00:10:20.527577396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:10:20.528337 containerd[1578]: time="2025-10-30T00:10:20.527723941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:10:20.530364 kubelet[2876]: E1030 00:10:20.529977 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:10:20.531598 kubelet[2876]: E1030 00:10:20.531542 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:10:20.532107 kubelet[2876]: E1030 00:10:20.531777 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8f55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vsw6q_calico-system(c0987c36-521e-441e-a4df-01b4de4064f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:20.533240 kubelet[2876]: E1030 00:10:20.533187 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7" Oct 30 00:10:20.569234 systemd-networkd[1445]: cali993d3541800: Gained IPv6LL 
Oct 30 00:10:20.701451 containerd[1578]: time="2025-10-30T00:10:20.701367384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38a0adea01dfb6be18e66eb0d3d7fbbbd7ad64e31bf5f97a6b13913737d44dc1\" id:\"3e7897a3949a0b62c3bef9a19dea21907ed7400f5b213cd5ddf2b67f0a8ee231\" pid:4785 exited_at:{seconds:1761783020 nanos:700532858}" Oct 30 00:10:21.400338 systemd-networkd[1445]: cali46527681659: Gained IPv6LL Oct 30 00:10:21.451746 kubelet[2876]: E1030 00:10:21.451670 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-846mn" podUID="143027e7-a13c-4c0d-bf53-591d4038e751" Oct 30 00:10:21.452989 kubelet[2876]: E1030 00:10:21.452842 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7" Oct 30 00:10:23.903677 ntpd[1679]: Listen normally on 6 vxlan.calico 192.168.42.0:123 Oct 30 00:10:23.904721 ntpd[1679]: 30 Oct 00:10:23 ntpd[1679]: Listen normally on 6 vxlan.calico 192.168.42.0:123 Oct 30 00:10:23.904721 ntpd[1679]: 30 Oct 00:10:23 ntpd[1679]: Listen normally on 7 calicf35e3db376 [fe80::ecee:eeff:feee:eeee%4]:123 Oct 30 00:10:23.904721 ntpd[1679]: 30 Oct 00:10:23 ntpd[1679]: Listen normally on 8 cali37e9f2c1f21 [fe80::ecee:eeff:feee:eeee%5]:123 Oct 30 00:10:23.904721 ntpd[1679]: 30 Oct 00:10:23 ntpd[1679]: Listen normally on 9 calib4a970da224 [fe80::ecee:eeff:feee:eeee%6]:123 Oct 30 00:10:23.904721 ntpd[1679]: 30 Oct 00:10:23 ntpd[1679]: Listen normally on 10 calif91c0e9bb68 [fe80::ecee:eeff:feee:eeee%7]:123 Oct 30 00:10:23.904721 ntpd[1679]: 30 Oct 00:10:23 ntpd[1679]: Listen normally on 11 cali61f592e5539 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 30 00:10:23.904721 ntpd[1679]: 30 Oct 00:10:23 ntpd[1679]: Listen normally on 12 cali85c78034bdc [fe80::ecee:eeff:feee:eeee%9]:123 Oct 30 00:10:23.903770 ntpd[1679]: Listen normally on 7 calicf35e3db376 [fe80::ecee:eeff:feee:eeee%4]:123 Oct 30 00:10:23.903816 ntpd[1679]: Listen normally on 8 cali37e9f2c1f21 [fe80::ecee:eeff:feee:eeee%5]:123 Oct 30 00:10:23.903858 ntpd[1679]: Listen normally on 9 calib4a970da224 [fe80::ecee:eeff:feee:eeee%6]:123 Oct 30 00:10:23.903899 ntpd[1679]: Listen normally on 10 calif91c0e9bb68 [fe80::ecee:eeff:feee:eeee%7]:123 Oct 30 00:10:23.903940 ntpd[1679]: Listen normally on 11 cali61f592e5539 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 30 00:10:23.903981 ntpd[1679]: Listen normally on 12 cali85c78034bdc [fe80::ecee:eeff:feee:eeee%9]:123 Oct 30 00:10:23.905448 ntpd[1679]: Listen normally on 13 vxlan.calico [fe80::646c:62ff:fe63:b0bf%10]:123 Oct 30 00:10:23.905751 ntpd[1679]: 30 Oct 00:10:23 ntpd[1679]: Listen normally on 13 
vxlan.calico [fe80::646c:62ff:fe63:b0bf%10]:123 Oct 30 00:10:23.905751 ntpd[1679]: 30 Oct 00:10:23 ntpd[1679]: Listen normally on 14 cali993d3541800 [fe80::ecee:eeff:feee:eeee%13]:123 Oct 30 00:10:23.905751 ntpd[1679]: 30 Oct 00:10:23 ntpd[1679]: Listen normally on 15 cali46527681659 [fe80::ecee:eeff:feee:eeee%14]:123 Oct 30 00:10:23.905530 ntpd[1679]: Listen normally on 14 cali993d3541800 [fe80::ecee:eeff:feee:eeee%13]:123 Oct 30 00:10:23.905573 ntpd[1679]: Listen normally on 15 cali46527681659 [fe80::ecee:eeff:feee:eeee%14]:123 Oct 30 00:10:27.073043 containerd[1578]: time="2025-10-30T00:10:27.072687048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:10:27.258513 containerd[1578]: time="2025-10-30T00:10:27.258419953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:27.260725 containerd[1578]: time="2025-10-30T00:10:27.260647132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:10:27.260930 containerd[1578]: time="2025-10-30T00:10:27.260782735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:10:27.261177 kubelet[2876]: E1030 00:10:27.261116 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:10:27.261804 kubelet[2876]: E1030 00:10:27.261197 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:10:27.261804 kubelet[2876]: E1030 00:10:27.261416 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6e3ec70ed95d466a96fa71158ea120e4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-687d4c5f4-hq5zw_calico-system(ecf4255f-62f0-4818-8c75-902857f1c600): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:27.264523 containerd[1578]: time="2025-10-30T00:10:27.264476659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:10:27.429960 containerd[1578]: time="2025-10-30T00:10:27.429617775Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:27.431612 containerd[1578]: time="2025-10-30T00:10:27.431502778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:10:27.431762 containerd[1578]: time="2025-10-30T00:10:27.431561891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:10:27.432365 kubelet[2876]: E1030 00:10:27.432279 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:10:27.432588 kubelet[2876]: E1030 00:10:27.432456 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 
00:10:27.433505 kubelet[2876]: E1030 00:10:27.433403 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-687d4c5f4-hq5zw_calico-system(ecf4255f-62f0-4818-8c75-902857f1c600): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:27.434745 kubelet[2876]: E1030 00:10:27.434676 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687d4c5f4-hq5zw" podUID="ecf4255f-62f0-4818-8c75-902857f1c600" Oct 30 00:10:29.058777 containerd[1578]: time="2025-10-30T00:10:29.057929895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:10:29.269039 containerd[1578]: time="2025-10-30T00:10:29.268937986Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:29.271100 containerd[1578]: time="2025-10-30T00:10:29.271043908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:10:29.271280 containerd[1578]: 
time="2025-10-30T00:10:29.271163115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:10:29.273256 kubelet[2876]: E1030 00:10:29.273199 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:10:29.273840 kubelet[2876]: E1030 00:10:29.273276 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:10:29.273840 kubelet[2876]: E1030 00:10:29.273540 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhzdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kskmh_calico-system(2a97c946-b833-49fb-b0be-330885d32847): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:29.275239 kubelet[2876]: E1030 00:10:29.275186 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kskmh" podUID="2a97c946-b833-49fb-b0be-330885d32847" Oct 30 00:10:30.058236 containerd[1578]: time="2025-10-30T00:10:30.058157852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:10:30.232747 containerd[1578]: time="2025-10-30T00:10:30.232655212Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Oct 30 00:10:30.234410 containerd[1578]: time="2025-10-30T00:10:30.234347958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:10:30.234558 containerd[1578]: time="2025-10-30T00:10:30.234473284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:10:30.235303 kubelet[2876]: E1030 00:10:30.235236 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:10:30.235409 kubelet[2876]: E1030 00:10:30.235319 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:10:30.235612 kubelet[2876]: E1030 00:10:30.235541 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8vgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-585ffdbd84-kh6p2_calico-system(f10b6c03-ea69-40d6-8304-f2729f28ebe7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:30.237378 kubelet[2876]: E1030 00:10:30.237317 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2" podUID="f10b6c03-ea69-40d6-8304-f2729f28ebe7" Oct 30 00:10:31.056414 containerd[1578]: time="2025-10-30T00:10:31.056046294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:10:31.229678 containerd[1578]: 
time="2025-10-30T00:10:31.229590956Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:31.231814 containerd[1578]: time="2025-10-30T00:10:31.231722965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:10:31.232377 containerd[1578]: time="2025-10-30T00:10:31.231878724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:10:31.233304 kubelet[2876]: E1030 00:10:31.232809 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:10:31.233304 kubelet[2876]: E1030 00:10:31.232882 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:10:31.235040 kubelet[2876]: E1030 00:10:31.234705 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gddmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f747677c9-bj47h_calico-apiserver(c462b67e-383a-4a79-a697-1a4848277370): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:31.236614 kubelet[2876]: E1030 00:10:31.236542 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-bj47h" podUID="c462b67e-383a-4a79-a697-1a4848277370" Oct 30 00:10:35.060308 containerd[1578]: time="2025-10-30T00:10:35.059251953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:10:35.227190 containerd[1578]: time="2025-10-30T00:10:35.227105071Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:35.229768 containerd[1578]: time="2025-10-30T00:10:35.229684135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:10:35.230461 containerd[1578]: time="2025-10-30T00:10:35.229875292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:10:35.231125 kubelet[2876]: E1030 00:10:35.231052 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:10:35.233625 kubelet[2876]: E1030 00:10:35.231244 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:10:35.233625 kubelet[2876]: E1030 00:10:35.232518 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8f55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vsw6q_calico-system(c0987c36-521e-441e-a4df-01b4de4064f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:35.235928 containerd[1578]: time="2025-10-30T00:10:35.233248870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:10:35.407354 containerd[1578]: time="2025-10-30T00:10:35.407190036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:35.409192 containerd[1578]: time="2025-10-30T00:10:35.409134110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:10:35.409340 containerd[1578]: time="2025-10-30T00:10:35.409258038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:10:35.409807 kubelet[2876]: E1030 00:10:35.409721 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:10:35.409937 kubelet[2876]: E1030 00:10:35.409860 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:10:35.412035 kubelet[2876]: E1030 00:10:35.410228 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpn7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f747677c9-846mn_calico-apiserver(143027e7-a13c-4c0d-bf53-591d4038e751): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:35.412265 containerd[1578]: time="2025-10-30T00:10:35.411523202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:10:35.413751 kubelet[2876]: E1030 00:10:35.413493 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-846mn" podUID="143027e7-a13c-4c0d-bf53-591d4038e751" Oct 30 00:10:35.610064 containerd[1578]: 
time="2025-10-30T00:10:35.609985171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:35.612178 containerd[1578]: time="2025-10-30T00:10:35.612109530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:10:35.612347 containerd[1578]: time="2025-10-30T00:10:35.612246763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:10:35.613398 kubelet[2876]: E1030 00:10:35.613319 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:10:35.613525 kubelet[2876]: E1030 00:10:35.613416 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:10:35.614314 kubelet[2876]: E1030 00:10:35.614237 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8f55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vsw6q_calico-system(c0987c36-521e-441e-a4df-01b4de4064f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:35.615518 kubelet[2876]: E1030 00:10:35.615462 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7" Oct 30 00:10:36.936448 systemd[1]: Started sshd@9-10.128.0.23:22-139.178.89.65:49916.service - OpenSSH per-connection server daemon (139.178.89.65:49916). Oct 30 00:10:37.274433 sshd[4823]: Accepted publickey for core from 139.178.89.65 port 49916 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:10:37.276756 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:37.290715 systemd-logind[1548]: New session 10 of user core. Oct 30 00:10:37.296521 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 00:10:37.670324 sshd[4826]: Connection closed by 139.178.89.65 port 49916 Oct 30 00:10:37.673341 sshd-session[4823]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:37.685665 systemd[1]: sshd@9-10.128.0.23:22-139.178.89.65:49916.service: Deactivated successfully. Oct 30 00:10:37.690767 systemd[1]: session-10.scope: Deactivated successfully. 
Oct 30 00:10:37.697789 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit. Oct 30 00:10:37.703772 systemd-logind[1548]: Removed session 10. Oct 30 00:10:40.056734 kubelet[2876]: E1030 00:10:40.056671 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kskmh" podUID="2a97c946-b833-49fb-b0be-330885d32847" Oct 30 00:10:42.059407 kubelet[2876]: E1030 00:10:42.059328 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-bj47h" podUID="c462b67e-383a-4a79-a697-1a4848277370" Oct 30 00:10:42.061441 kubelet[2876]: E1030 00:10:42.060269 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687d4c5f4-hq5zw" podUID="ecf4255f-62f0-4818-8c75-902857f1c600" Oct 30 00:10:42.729587 systemd[1]: Started sshd@10-10.128.0.23:22-139.178.89.65:49930.service - OpenSSH per-connection server daemon (139.178.89.65:49930). Oct 30 00:10:43.070814 sshd[4850]: Accepted publickey for core from 139.178.89.65 port 49930 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:10:43.073600 sshd-session[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:43.082069 systemd-logind[1548]: New session 11 of user core. Oct 30 00:10:43.090460 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 30 00:10:43.438824 sshd[4853]: Connection closed by 139.178.89.65 port 49930 Oct 30 00:10:43.439948 sshd-session[4850]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:43.454472 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Oct 30 00:10:43.458375 systemd[1]: sshd@10-10.128.0.23:22-139.178.89.65:49930.service: Deactivated successfully. Oct 30 00:10:43.465766 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 00:10:43.471155 systemd-logind[1548]: Removed session 11. 
Oct 30 00:10:45.062984 kubelet[2876]: E1030 00:10:45.062910 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2" podUID="f10b6c03-ea69-40d6-8304-f2729f28ebe7" Oct 30 00:10:47.058372 kubelet[2876]: E1030 00:10:47.058304 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-846mn" podUID="143027e7-a13c-4c0d-bf53-591d4038e751" Oct 30 00:10:48.498210 systemd[1]: Started sshd@11-10.128.0.23:22-139.178.89.65:55182.service - OpenSSH per-connection server daemon (139.178.89.65:55182). Oct 30 00:10:48.829297 sshd[4867]: Accepted publickey for core from 139.178.89.65 port 55182 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:10:48.832378 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:48.842712 systemd-logind[1548]: New session 12 of user core. Oct 30 00:10:48.848401 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 30 00:10:49.062424 kubelet[2876]: E1030 00:10:49.062319 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7" Oct 30 00:10:49.191984 sshd[4870]: Connection closed by 139.178.89.65 port 55182 Oct 30 00:10:49.195469 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:49.212757 systemd[1]: sshd@11-10.128.0.23:22-139.178.89.65:55182.service: Deactivated successfully. Oct 30 00:10:49.213347 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Oct 30 00:10:49.218852 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 00:10:49.231711 systemd-logind[1548]: Removed session 12. Oct 30 00:10:49.256167 systemd[1]: Started sshd@12-10.128.0.23:22-139.178.89.65:55188.service - OpenSSH per-connection server daemon (139.178.89.65:55188). 
Oct 30 00:10:49.595539 sshd[4883]: Accepted publickey for core from 139.178.89.65 port 55188 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:10:49.599231 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:49.611562 systemd-logind[1548]: New session 13 of user core. Oct 30 00:10:49.621367 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 00:10:50.063210 sshd[4886]: Connection closed by 139.178.89.65 port 55188 Oct 30 00:10:50.066050 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:50.079106 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Oct 30 00:10:50.080851 systemd[1]: sshd@12-10.128.0.23:22-139.178.89.65:55188.service: Deactivated successfully. Oct 30 00:10:50.088294 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 00:10:50.094674 systemd-logind[1548]: Removed session 13. Oct 30 00:10:50.130590 systemd[1]: Started sshd@13-10.128.0.23:22-139.178.89.65:55190.service - OpenSSH per-connection server daemon (139.178.89.65:55190). Oct 30 00:10:50.490295 sshd[4898]: Accepted publickey for core from 139.178.89.65 port 55190 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:10:50.492854 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:50.505943 systemd-logind[1548]: New session 14 of user core. Oct 30 00:10:50.513571 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 30 00:10:50.885425 containerd[1578]: time="2025-10-30T00:10:50.884909788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38a0adea01dfb6be18e66eb0d3d7fbbbd7ad64e31bf5f97a6b13913737d44dc1\" id:\"d2e97f0ffcbe67ca6ae3646b6abf492fd1ceb5002e8bc2e2b929deb0ee4fe1f0\" pid:4914 exited_at:{seconds:1761783050 nanos:883553546}" Oct 30 00:10:50.930120 sshd[4901]: Connection closed by 139.178.89.65 port 55190 Oct 30 00:10:50.934376 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:50.943078 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Oct 30 00:10:50.944402 systemd[1]: sshd@13-10.128.0.23:22-139.178.89.65:55190.service: Deactivated successfully. Oct 30 00:10:50.950071 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 00:10:50.957137 systemd-logind[1548]: Removed session 14. Oct 30 00:10:51.060815 containerd[1578]: time="2025-10-30T00:10:51.060078515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:10:51.231281 containerd[1578]: time="2025-10-30T00:10:51.231196083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:51.233952 containerd[1578]: time="2025-10-30T00:10:51.233106420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:10:51.233952 containerd[1578]: time="2025-10-30T00:10:51.233115059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:10:51.234217 kubelet[2876]: E1030 00:10:51.233446 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:10:51.234217 kubelet[2876]: E1030 00:10:51.233514 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:10:51.234217 kubelet[2876]: E1030 00:10:51.233741 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhzdc
,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kskmh_calico-system(2a97c946-b833-49fb-b0be-330885d32847): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:51.235530 kubelet[2876]: E1030 00:10:51.235135 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kskmh" podUID="2a97c946-b833-49fb-b0be-330885d32847" Oct 30 00:10:55.991694 systemd[1]: Started sshd@14-10.128.0.23:22-139.178.89.65:55194.service - OpenSSH per-connection server daemon (139.178.89.65:55194). Oct 30 00:10:56.057732 containerd[1578]: time="2025-10-30T00:10:56.057675421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:10:56.231046 containerd[1578]: time="2025-10-30T00:10:56.230764460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:56.232745 containerd[1578]: time="2025-10-30T00:10:56.232450568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:10:56.235063 containerd[1578]: time="2025-10-30T00:10:56.232716526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:10:56.235223 kubelet[2876]: E1030 00:10:56.234457 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:10:56.235223 kubelet[2876]: E1030 00:10:56.234536 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:10:56.235223 kubelet[2876]: E1030 00:10:56.234853 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8vgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-585ffdbd84-kh6p2_calico-system(f10b6c03-ea69-40d6-8304-f2729f28ebe7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:56.237758 containerd[1578]: time="2025-10-30T00:10:56.237264081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:10:56.237906 kubelet[2876]: E1030 00:10:56.237679 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2" podUID="f10b6c03-ea69-40d6-8304-f2729f28ebe7" Oct 30 00:10:56.332419 sshd[4940]: 
Accepted publickey for core from 139.178.89.65 port 55194 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:10:56.334846 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:56.345379 systemd-logind[1548]: New session 15 of user core. Oct 30 00:10:56.352492 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 00:10:56.397730 containerd[1578]: time="2025-10-30T00:10:56.397491168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:56.399534 containerd[1578]: time="2025-10-30T00:10:56.399355132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:10:56.399534 containerd[1578]: time="2025-10-30T00:10:56.399474518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:10:56.401073 kubelet[2876]: E1030 00:10:56.399822 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:10:56.401334 kubelet[2876]: E1030 00:10:56.401287 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:10:56.401768 kubelet[2876]: 
E1030 00:10:56.401696 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gddmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f747677c9-bj47h_calico-apiserver(c462b67e-383a-4a79-a697-1a4848277370): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:56.402575 containerd[1578]: time="2025-10-30T00:10:56.402528963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:10:56.403492 kubelet[2876]: E1030 00:10:56.403449 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-bj47h" podUID="c462b67e-383a-4a79-a697-1a4848277370" Oct 30 00:10:56.571414 containerd[1578]: 
time="2025-10-30T00:10:56.571157523Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:56.574040 containerd[1578]: time="2025-10-30T00:10:56.572875300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:10:56.574040 containerd[1578]: time="2025-10-30T00:10:56.572880008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:10:56.574554 kubelet[2876]: E1030 00:10:56.574497 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:10:56.574794 kubelet[2876]: E1030 00:10:56.574760 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:10:56.576243 kubelet[2876]: E1030 00:10:56.576162 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6e3ec70ed95d466a96fa71158ea120e4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-687d4c5f4-hq5zw_calico-system(ecf4255f-62f0-4818-8c75-902857f1c600): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:56.580945 containerd[1578]: time="2025-10-30T00:10:56.579821525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 
00:10:56.731586 sshd[4943]: Connection closed by 139.178.89.65 port 55194 Oct 30 00:10:56.733361 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:56.741271 systemd[1]: sshd@14-10.128.0.23:22-139.178.89.65:55194.service: Deactivated successfully. Oct 30 00:10:56.741874 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit. Oct 30 00:10:56.748368 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 00:10:56.755762 systemd-logind[1548]: Removed session 15. Oct 30 00:10:56.788056 containerd[1578]: time="2025-10-30T00:10:56.787646211Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:10:56.789537 containerd[1578]: time="2025-10-30T00:10:56.789327171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:10:56.789537 containerd[1578]: time="2025-10-30T00:10:56.789483660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:10:56.789919 kubelet[2876]: E1030 00:10:56.789805 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:10:56.789919 kubelet[2876]: E1030 00:10:56.789972 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:10:56.791207 kubelet[2876]: E1030 00:10:56.791124 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevice
s:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-687d4c5f4-hq5zw_calico-system(ecf4255f-62f0-4818-8c75-902857f1c600): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:10:56.792777 kubelet[2876]: E1030 00:10:56.792603 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687d4c5f4-hq5zw" podUID="ecf4255f-62f0-4818-8c75-902857f1c600" Oct 30 00:11:01.059066 containerd[1578]: time="2025-10-30T00:11:01.058178338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:11:01.304477 containerd[1578]: time="2025-10-30T00:11:01.304170138Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:11:01.306177 containerd[1578]: time="2025-10-30T00:11:01.306116982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:11:01.308035 containerd[1578]: time="2025-10-30T00:11:01.306320462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:11:01.308778 kubelet[2876]: E1030 00:11:01.308411 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:11:01.308778 kubelet[2876]: E1030 00:11:01.308486 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:11:01.308778 kubelet[2876]: E1030 00:11:01.308692 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpn7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f747677c9-846mn_calico-apiserver(143027e7-a13c-4c0d-bf53-591d4038e751): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:11:01.310912 kubelet[2876]: E1030 00:11:01.310755 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-846mn" podUID="143027e7-a13c-4c0d-bf53-591d4038e751" Oct 30 00:11:01.790065 systemd[1]: Started sshd@15-10.128.0.23:22-139.178.89.65:58912.service - OpenSSH per-connection server daemon (139.178.89.65:58912). Oct 30 00:11:02.128945 sshd[4963]: Accepted publickey for core from 139.178.89.65 port 58912 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:11:02.133345 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:11:02.149132 systemd-logind[1548]: New session 16 of user core. Oct 30 00:11:02.155272 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 00:11:02.486044 sshd[4966]: Connection closed by 139.178.89.65 port 58912 Oct 30 00:11:02.486890 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Oct 30 00:11:02.495720 systemd[1]: sshd@15-10.128.0.23:22-139.178.89.65:58912.service: Deactivated successfully. Oct 30 00:11:02.501489 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 00:11:02.503864 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit. Oct 30 00:11:02.507712 systemd-logind[1548]: Removed session 16. 
Oct 30 00:11:04.057069 containerd[1578]: time="2025-10-30T00:11:04.056851908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:11:04.327157 containerd[1578]: time="2025-10-30T00:11:04.326603182Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:11:04.328487 containerd[1578]: time="2025-10-30T00:11:04.328422769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:11:04.328639 containerd[1578]: time="2025-10-30T00:11:04.328446473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:11:04.328973 kubelet[2876]: E1030 00:11:04.328899 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:11:04.329524 kubelet[2876]: E1030 00:11:04.328989 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:11:04.329524 kubelet[2876]: E1030 00:11:04.329413 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8f55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vsw6q_calico-system(c0987c36-521e-441e-a4df-01b4de4064f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:11:04.332573 containerd[1578]: time="2025-10-30T00:11:04.332468641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:11:04.532404 containerd[1578]: time="2025-10-30T00:11:04.532286998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:11:04.537491 containerd[1578]: time="2025-10-30T00:11:04.537382406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:11:04.537787 containerd[1578]: time="2025-10-30T00:11:04.537619372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:11:04.538358 kubelet[2876]: E1030 00:11:04.538234 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:11:04.538358 kubelet[2876]: E1030 00:11:04.538310 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:11:04.539304 kubelet[2876]: E1030 
00:11:04.538516 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8f55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-vsw6q_calico-system(c0987c36-521e-441e-a4df-01b4de4064f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:11:04.540184 kubelet[2876]: E1030 00:11:04.539875 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7" Oct 30 00:11:07.057909 kubelet[2876]: E1030 00:11:07.057823 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kskmh" podUID="2a97c946-b833-49fb-b0be-330885d32847" Oct 30 00:11:07.548677 systemd[1]: Started sshd@16-10.128.0.23:22-139.178.89.65:40228.service - OpenSSH 
per-connection server daemon (139.178.89.65:40228). Oct 30 00:11:07.889273 sshd[4981]: Accepted publickey for core from 139.178.89.65 port 40228 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:11:07.892355 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:11:07.905545 systemd-logind[1548]: New session 17 of user core. Oct 30 00:11:07.916294 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 30 00:11:08.058463 kubelet[2876]: E1030 00:11:08.058376 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2" podUID="f10b6c03-ea69-40d6-8304-f2729f28ebe7" Oct 30 00:11:08.265123 sshd[4984]: Connection closed by 139.178.89.65 port 40228 Oct 30 00:11:08.266378 sshd-session[4981]: pam_unix(sshd:session): session closed for user core Oct 30 00:11:08.276482 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit. Oct 30 00:11:08.277861 systemd[1]: sshd@16-10.128.0.23:22-139.178.89.65:40228.service: Deactivated successfully. Oct 30 00:11:08.283824 systemd[1]: session-17.scope: Deactivated successfully. Oct 30 00:11:08.288339 systemd-logind[1548]: Removed session 17. 
Oct 30 00:11:09.065098 kubelet[2876]: E1030 00:11:09.064981 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687d4c5f4-hq5zw" podUID="ecf4255f-62f0-4818-8c75-902857f1c600" Oct 30 00:11:10.058300 kubelet[2876]: E1030 00:11:10.058224 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-bj47h" podUID="c462b67e-383a-4a79-a697-1a4848277370" Oct 30 00:11:13.333254 systemd[1]: Started sshd@17-10.128.0.23:22-139.178.89.65:40242.service - OpenSSH per-connection server daemon (139.178.89.65:40242). 
Oct 30 00:11:13.667252 sshd[4996]: Accepted publickey for core from 139.178.89.65 port 40242 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:11:13.672487 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:11:13.685005 systemd-logind[1548]: New session 18 of user core. Oct 30 00:11:13.695276 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 30 00:11:14.026043 sshd[4999]: Connection closed by 139.178.89.65 port 40242 Oct 30 00:11:14.027337 sshd-session[4996]: pam_unix(sshd:session): session closed for user core Oct 30 00:11:14.037795 systemd[1]: sshd@17-10.128.0.23:22-139.178.89.65:40242.service: Deactivated successfully. Oct 30 00:11:14.043414 systemd[1]: session-18.scope: Deactivated successfully. Oct 30 00:11:14.046142 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit. Oct 30 00:11:14.050427 systemd-logind[1548]: Removed session 18. Oct 30 00:11:14.057673 kubelet[2876]: E1030 00:11:14.057584 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-846mn" podUID="143027e7-a13c-4c0d-bf53-591d4038e751" Oct 30 00:11:14.088651 systemd[1]: Started sshd@18-10.128.0.23:22-139.178.89.65:40250.service - OpenSSH per-connection server daemon (139.178.89.65:40250). 
Oct 30 00:11:14.427442 sshd[5011]: Accepted publickey for core from 139.178.89.65 port 40250 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:11:14.430214 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:11:14.442065 systemd-logind[1548]: New session 19 of user core. Oct 30 00:11:14.450531 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 30 00:11:14.897607 sshd[5014]: Connection closed by 139.178.89.65 port 40250 Oct 30 00:11:14.898735 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Oct 30 00:11:14.912146 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit. Oct 30 00:11:14.912395 systemd[1]: sshd@18-10.128.0.23:22-139.178.89.65:40250.service: Deactivated successfully. Oct 30 00:11:14.920392 systemd[1]: session-19.scope: Deactivated successfully. Oct 30 00:11:14.929298 systemd-logind[1548]: Removed session 19. Oct 30 00:11:14.953663 systemd[1]: Started sshd@19-10.128.0.23:22-139.178.89.65:40258.service - OpenSSH per-connection server daemon (139.178.89.65:40258). Oct 30 00:11:15.284823 sshd[5024]: Accepted publickey for core from 139.178.89.65 port 40258 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:11:15.287631 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:11:15.296800 systemd-logind[1548]: New session 20 of user core. Oct 30 00:11:15.304551 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 30 00:11:16.685216 sshd[5027]: Connection closed by 139.178.89.65 port 40258 Oct 30 00:11:16.686373 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Oct 30 00:11:16.703719 systemd[1]: sshd@19-10.128.0.23:22-139.178.89.65:40258.service: Deactivated successfully. Oct 30 00:11:16.705113 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit. 
Oct 30 00:11:16.715990 systemd[1]: session-20.scope: Deactivated successfully. Oct 30 00:11:16.725599 systemd-logind[1548]: Removed session 20. Oct 30 00:11:16.752673 systemd[1]: Started sshd@20-10.128.0.23:22-139.178.89.65:38240.service - OpenSSH per-connection server daemon (139.178.89.65:38240). Oct 30 00:11:17.112414 sshd[5042]: Accepted publickey for core from 139.178.89.65 port 38240 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:11:17.118093 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:11:17.134134 systemd-logind[1548]: New session 21 of user core. Oct 30 00:11:17.139699 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 30 00:11:17.746208 sshd[5047]: Connection closed by 139.178.89.65 port 38240 Oct 30 00:11:17.748663 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Oct 30 00:11:17.763557 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit. Oct 30 00:11:17.766253 systemd[1]: sshd@20-10.128.0.23:22-139.178.89.65:38240.service: Deactivated successfully. Oct 30 00:11:17.772443 systemd[1]: session-21.scope: Deactivated successfully. Oct 30 00:11:17.784097 systemd-logind[1548]: Removed session 21. Oct 30 00:11:17.813343 systemd[1]: Started sshd@21-10.128.0.23:22-139.178.89.65:38256.service - OpenSSH per-connection server daemon (139.178.89.65:38256). 
Oct 30 00:11:18.062285 kubelet[2876]: E1030 00:11:18.060834 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7" Oct 30 00:11:18.171068 sshd[5057]: Accepted publickey for core from 139.178.89.65 port 38256 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:11:18.174931 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:11:18.188356 systemd-logind[1548]: New session 22 of user core. Oct 30 00:11:18.197356 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 30 00:11:18.547408 sshd[5060]: Connection closed by 139.178.89.65 port 38256 Oct 30 00:11:18.545371 sshd-session[5057]: pam_unix(sshd:session): session closed for user core Oct 30 00:11:18.562650 systemd[1]: sshd@21-10.128.0.23:22-139.178.89.65:38256.service: Deactivated successfully. Oct 30 00:11:18.569576 systemd[1]: session-22.scope: Deactivated successfully. Oct 30 00:11:18.575751 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit. 
Oct 30 00:11:18.578358 systemd-logind[1548]: Removed session 22. Oct 30 00:11:19.061763 kubelet[2876]: E1030 00:11:19.061695 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kskmh" podUID="2a97c946-b833-49fb-b0be-330885d32847" Oct 30 00:11:20.755079 containerd[1578]: time="2025-10-30T00:11:20.754955702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38a0adea01dfb6be18e66eb0d3d7fbbbd7ad64e31bf5f97a6b13913737d44dc1\" id:\"48ae917aa7d4221c00840df7cbfc14454d5bf682238a8d74cc68430ca3adab1f\" pid:5084 exited_at:{seconds:1761783080 nanos:754145061}" Oct 30 00:11:22.056748 kubelet[2876]: E1030 00:11:22.056681 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-bj47h" podUID="c462b67e-383a-4a79-a697-1a4848277370" Oct 30 00:11:23.064195 kubelet[2876]: E1030 00:11:23.064126 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687d4c5f4-hq5zw" podUID="ecf4255f-62f0-4818-8c75-902857f1c600" Oct 30 00:11:23.064892 kubelet[2876]: E1030 00:11:23.064431 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2" podUID="f10b6c03-ea69-40d6-8304-f2729f28ebe7" Oct 30 00:11:23.607876 systemd[1]: Started sshd@22-10.128.0.23:22-139.178.89.65:38272.service - OpenSSH per-connection server daemon (139.178.89.65:38272). Oct 30 00:11:23.946221 sshd[5100]: Accepted publickey for core from 139.178.89.65 port 38272 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:11:23.949661 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:11:23.962221 systemd-logind[1548]: New session 23 of user core. Oct 30 00:11:23.971743 systemd[1]: Started session-23.scope - Session 23 of User core. 
Oct 30 00:11:24.380195 sshd[5103]: Connection closed by 139.178.89.65 port 38272 Oct 30 00:11:24.382674 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Oct 30 00:11:24.392600 systemd[1]: sshd@22-10.128.0.23:22-139.178.89.65:38272.service: Deactivated successfully. Oct 30 00:11:24.398848 systemd[1]: session-23.scope: Deactivated successfully. Oct 30 00:11:24.404702 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit. Oct 30 00:11:24.409418 systemd-logind[1548]: Removed session 23. Oct 30 00:11:26.060051 kubelet[2876]: E1030 00:11:26.058456 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-846mn" podUID="143027e7-a13c-4c0d-bf53-591d4038e751" Oct 30 00:11:29.438483 systemd[1]: Started sshd@23-10.128.0.23:22-139.178.89.65:57560.service - OpenSSH per-connection server daemon (139.178.89.65:57560). Oct 30 00:11:29.765060 sshd[5117]: Accepted publickey for core from 139.178.89.65 port 57560 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4 Oct 30 00:11:29.768383 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:11:29.783123 systemd-logind[1548]: New session 24 of user core. Oct 30 00:11:29.787277 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 30 00:11:30.058967 kubelet[2876]: E1030 00:11:30.058743 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vsw6q" podUID="c0987c36-521e-441e-a4df-01b4de4064f7" Oct 30 00:11:30.155038 sshd[5120]: Connection closed by 139.178.89.65 port 57560 Oct 30 00:11:30.157429 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Oct 30 00:11:30.167908 systemd[1]: sshd@23-10.128.0.23:22-139.178.89.65:57560.service: Deactivated successfully. Oct 30 00:11:30.173827 systemd[1]: session-24.scope: Deactivated successfully. Oct 30 00:11:30.180525 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit. Oct 30 00:11:30.182987 systemd-logind[1548]: Removed session 24. 
Oct 30 00:11:32.056823 containerd[1578]: time="2025-10-30T00:11:32.056661864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 30 00:11:32.217048 containerd[1578]: time="2025-10-30T00:11:32.216239623Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:11:32.217939 containerd[1578]: time="2025-10-30T00:11:32.217827049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 30 00:11:32.218101 containerd[1578]: time="2025-10-30T00:11:32.217870631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 30 00:11:32.218376 kubelet[2876]: E1030 00:11:32.218317 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 30 00:11:32.218885 kubelet[2876]: E1030 00:11:32.218404 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 30 00:11:32.218885 kubelet[2876]: E1030 00:11:32.218639 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhzdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kskmh_calico-system(2a97c946-b833-49fb-b0be-330885d32847): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:11:32.220179 kubelet[2876]: E1030 00:11:32.220118 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kskmh" podUID="2a97c946-b833-49fb-b0be-330885d32847"
Oct 30 00:11:34.055955 kubelet[2876]: E1030 00:11:34.055878 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-bj47h" podUID="c462b67e-383a-4a79-a697-1a4848277370"
Oct 30 00:11:35.060720 kubelet[2876]: E1030 00:11:35.059980 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687d4c5f4-hq5zw" podUID="ecf4255f-62f0-4818-8c75-902857f1c600"
Oct 30 00:11:35.218706 systemd[1]: Started sshd@24-10.128.0.23:22-139.178.89.65:57572.service - OpenSSH per-connection server daemon (139.178.89.65:57572).
Oct 30 00:11:35.565675 sshd[5133]: Accepted publickey for core from 139.178.89.65 port 57572 ssh2: RSA SHA256:cd4BroKjj9biZXF9zqqwXFm4iGhXl03Qh7zF4IiT+a4
Oct 30 00:11:35.567400 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:11:35.576094 systemd-logind[1548]: New session 25 of user core.
Oct 30 00:11:35.584277 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 30 00:11:35.905151 sshd[5139]: Connection closed by 139.178.89.65 port 57572
Oct 30 00:11:35.906003 sshd-session[5133]: pam_unix(sshd:session): session closed for user core
Oct 30 00:11:35.917151 systemd-logind[1548]: Session 25 logged out. Waiting for processes to exit.
Oct 30 00:11:35.918544 systemd[1]: sshd@24-10.128.0.23:22-139.178.89.65:57572.service: Deactivated successfully.
Oct 30 00:11:35.926445 systemd[1]: session-25.scope: Deactivated successfully.
Oct 30 00:11:35.933852 systemd-logind[1548]: Removed session 25.
Oct 30 00:11:37.060330 kubelet[2876]: E1030 00:11:37.060232 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f747677c9-846mn" podUID="143027e7-a13c-4c0d-bf53-591d4038e751"
Oct 30 00:11:38.058258 containerd[1578]: time="2025-10-30T00:11:38.058193064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 30 00:11:38.252045 containerd[1578]: time="2025-10-30T00:11:38.251327863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:11:38.253248 containerd[1578]: time="2025-10-30T00:11:38.253182439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 30 00:11:38.253387 containerd[1578]: time="2025-10-30T00:11:38.253319719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 30 00:11:38.253748 kubelet[2876]: E1030 00:11:38.253657 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 30 00:11:38.257359 kubelet[2876]: E1030 00:11:38.253756 2876 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 30 00:11:38.257359 kubelet[2876]: E1030 00:11:38.254155 2876 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8vgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-585ffdbd84-kh6p2_calico-system(f10b6c03-ea69-40d6-8304-f2729f28ebe7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:11:38.257359 kubelet[2876]: E1030 00:11:38.255530 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-585ffdbd84-kh6p2" podUID="f10b6c03-ea69-40d6-8304-f2729f28ebe7"