Dec 12 18:43:14.093321 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:43:14.093363 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:43:14.093386 kernel: BIOS-provided physical RAM map:
Dec 12 18:43:14.093401 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Dec 12 18:43:14.093413 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Dec 12 18:43:14.093427 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Dec 12 18:43:14.093444 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Dec 12 18:43:14.093460 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Dec 12 18:43:14.093474 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd318fff] usable
Dec 12 18:43:14.093493 kernel: BIOS-e820: [mem 0x00000000bd319000-0x00000000bd322fff] ACPI data
Dec 12 18:43:14.093507 kernel: BIOS-e820: [mem 0x00000000bd323000-0x00000000bf8ecfff] usable
Dec 12 18:43:14.093522 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Dec 12 18:43:14.093537 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Dec 12 18:43:14.093552 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Dec 12 18:43:14.093570 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Dec 12 18:43:14.093629 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Dec 12 18:43:14.093644 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Dec 12 18:43:14.093659 kernel: NX (Execute Disable) protection: active
Dec 12 18:43:14.093675 kernel: APIC: Static calls initialized
Dec 12 18:43:14.093690 kernel: efi: EFI v2.7 by EDK II
Dec 12 18:43:14.093706 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018 RNG=0xbfb73018 TPMEventLog=0xbd319018
Dec 12 18:43:14.093722 kernel: random: crng init done
Dec 12 18:43:14.093737 kernel: secureboot: Secure boot disabled
Dec 12 18:43:14.093753 kernel: SMBIOS 2.4 present.
Dec 12 18:43:14.093768 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Dec 12 18:43:14.093788 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:43:14.093803 kernel: Hypervisor detected: KVM
Dec 12 18:43:14.093820 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 12 18:43:14.093836 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:43:14.093852 kernel: kvm-clock: using sched offset of 15280016335 cycles
Dec 12 18:43:14.093870 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:43:14.093887 kernel: tsc: Detected 2299.998 MHz processor
Dec 12 18:43:14.093903 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:43:14.093919 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:43:14.093934 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Dec 12 18:43:14.093954 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Dec 12 18:43:14.093970 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:43:14.093987 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 12 18:43:14.094003 kernel: Using GB pages for direct mapping
Dec 12 18:43:14.094019 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:43:14.094042 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Dec 12 18:43:14.094060 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Dec 12 18:43:14.094081 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Dec 12 18:43:14.094099 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Dec 12 18:43:14.094116 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Dec 12 18:43:14.094134 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Dec 12 18:43:14.094151 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Dec 12 18:43:14.094179 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Dec 12 18:43:14.094197 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Dec 12 18:43:14.094217 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Dec 12 18:43:14.094235 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Dec 12 18:43:14.094252 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Dec 12 18:43:14.094270 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Dec 12 18:43:14.094287 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Dec 12 18:43:14.094305 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Dec 12 18:43:14.094322 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Dec 12 18:43:14.094340 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Dec 12 18:43:14.094357 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Dec 12 18:43:14.094377 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Dec 12 18:43:14.094395 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Dec 12 18:43:14.094412 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 12 18:43:14.094429 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Dec 12 18:43:14.094447 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Dec 12 18:43:14.094465 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Dec 12 18:43:14.094483 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Dec 12 18:43:14.094500 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff]
Dec 12 18:43:14.094518 kernel: Zone ranges:
Dec 12 18:43:14.094538 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:43:14.094557 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 12 18:43:14.094574 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Dec 12 18:43:14.094662 kernel: Device empty
Dec 12 18:43:14.094680 kernel: Movable zone start for each node
Dec 12 18:43:14.094697 kernel: Early memory node ranges
Dec 12 18:43:14.094713 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Dec 12 18:43:14.094730 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Dec 12 18:43:14.094747 kernel: node 0: [mem 0x0000000000100000-0x00000000bd318fff]
Dec 12 18:43:14.094768 kernel: node 0: [mem 0x00000000bd323000-0x00000000bf8ecfff]
Dec 12 18:43:14.094785 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Dec 12 18:43:14.094802 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Dec 12 18:43:14.094819 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Dec 12 18:43:14.094835 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:43:14.094852 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Dec 12 18:43:14.094868 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Dec 12 18:43:14.094884 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Dec 12 18:43:14.094902 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 12 18:43:14.094922 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Dec 12 18:43:14.094938 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 12 18:43:14.094955 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:43:14.094972 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:43:14.094989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:43:14.095007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:43:14.095025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:43:14.095042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:43:14.095059 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:43:14.095080 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:43:14.095098 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:43:14.095115 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:43:14.095133 kernel: CPU topo: Max. threads per core: 2
Dec 12 18:43:14.095150 kernel: CPU topo: Num. cores per package: 1
Dec 12 18:43:14.095176 kernel: CPU topo: Num. threads per package: 2
Dec 12 18:43:14.095193 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 12 18:43:14.095209 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 12 18:43:14.095226 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:43:14.095244 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:43:14.095265 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 12 18:43:14.095282 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 12 18:43:14.095299 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 12 18:43:14.095315 kernel: pcpu-alloc: [0] 0 1
Dec 12 18:43:14.095333 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:43:14.095351 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:43:14.095370 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:43:14.095387 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 12 18:43:14.095408 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:43:14.095425 kernel: Fallback order for Node 0: 0
Dec 12 18:43:14.095442 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136
Dec 12 18:43:14.095458 kernel: Policy zone: Normal
Dec 12 18:43:14.095476 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:43:14.095494 kernel: software IO TLB: area num 2.
Dec 12 18:43:14.095523 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 18:43:14.095545 kernel: Kernel/User page tables isolation: enabled
Dec 12 18:43:14.095563 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:43:14.095598 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:43:14.095617 kernel: Dynamic Preempt: voluntary
Dec 12 18:43:14.095635 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:43:14.095657 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:43:14.095674 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 18:43:14.095692 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:43:14.095711 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:43:14.095729 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:43:14.095751 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:43:14.095770 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 18:43:14.095789 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:43:14.095806 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:43:14.095825 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:43:14.095844 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 12 18:43:14.095861 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:43:14.095879 kernel: Console: colour dummy device 80x25
Dec 12 18:43:14.095901 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:43:14.095920 kernel: ACPI: Core revision 20240827
Dec 12 18:43:14.095937 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:43:14.095955 kernel: x2apic enabled
Dec 12 18:43:14.095975 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:43:14.095992 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Dec 12 18:43:14.096010 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 12 18:43:14.096029 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Dec 12 18:43:14.096048 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Dec 12 18:43:14.096066 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Dec 12 18:43:14.096089 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:43:14.096108 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Dec 12 18:43:14.096125 kernel: Spectre V2 : Mitigation: IBRS
Dec 12 18:43:14.096143 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:43:14.096170 kernel: RETBleed: Mitigation: IBRS
Dec 12 18:43:14.096188 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:43:14.096207 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Dec 12 18:43:14.096225 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:43:14.096247 kernel: MDS: Mitigation: Clear CPU buffers
Dec 12 18:43:14.096266 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 12 18:43:14.096284 kernel: active return thunk: its_return_thunk
Dec 12 18:43:14.096302 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 12 18:43:14.096321 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:43:14.096338 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:43:14.096357 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:43:14.096374 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:43:14.096391 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 12 18:43:14.096413 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:43:14.096431 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:43:14.096450 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:43:14.096469 kernel: landlock: Up and running.
Dec 12 18:43:14.096488 kernel: SELinux: Initializing.
Dec 12 18:43:14.096507 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 12 18:43:14.096526 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 12 18:43:14.096546 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Dec 12 18:43:14.096565 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Dec 12 18:43:14.096622 kernel: signal: max sigframe size: 1776
Dec 12 18:43:14.096640 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:43:14.096656 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:43:14.096673 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:43:14.096691 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 12 18:43:14.096709 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:43:14.096727 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:43:14.096744 kernel: .... node #0, CPUs: #1
Dec 12 18:43:14.096762 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 12 18:43:14.096787 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 12 18:43:14.096804 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 18:43:14.096820 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 12 18:43:14.096837 kernel: Memory: 7556056K/7860544K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 298912K reserved, 0K cma-reserved)
Dec 12 18:43:14.096856 kernel: devtmpfs: initialized
Dec 12 18:43:14.096872 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:43:14.096889 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Dec 12 18:43:14.096907 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:43:14.096928 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 18:43:14.096946 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:43:14.096963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:43:14.096981 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:43:14.096999 kernel: audit: type=2000 audit(1765564989.836:1): state=initialized audit_enabled=0 res=1
Dec 12 18:43:14.097016 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:43:14.097034 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:43:14.097053 kernel: cpuidle: using governor menu
Dec 12 18:43:14.097069 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:43:14.097091 kernel: dca service started, version 1.12.1
Dec 12 18:43:14.097110 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:43:14.097127 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:43:14.097144 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:43:14.097171 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:43:14.097188 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:43:14.097207 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:43:14.097225 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:43:14.097242 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:43:14.097264 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:43:14.097281 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 12 18:43:14.097300 kernel: ACPI: Interpreter enabled
Dec 12 18:43:14.097318 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 12 18:43:14.097337 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:43:14.097354 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:43:14.097372 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 12 18:43:14.097390 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 12 18:43:14.097408 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:43:14.097677 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:43:14.097910 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 12 18:43:14.099825 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 12 18:43:14.099853 kernel: PCI host bridge to bus 0000:00
Dec 12 18:43:14.100053 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:43:14.100277 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:43:14.100461 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:43:14.100652 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Dec 12 18:43:14.100815 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:43:14.101014 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:43:14.101220 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:43:14.101408 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 12 18:43:14.103628 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 12 18:43:14.103873 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Dec 12 18:43:14.104071 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 12 18:43:14.104273 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Dec 12 18:43:14.104469 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:43:14.104730 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Dec 12 18:43:14.104919 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Dec 12 18:43:14.105121 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 18:43:14.105315 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Dec 12 18:43:14.105502 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Dec 12 18:43:14.105527 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:43:14.105547 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:43:14.105566 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:43:14.105598 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:43:14.105617 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 12 18:43:14.105641 kernel: iommu: Default domain type: Translated
Dec 12 18:43:14.105659 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:43:14.105678 kernel: efivars: Registered efivars operations
Dec 12 18:43:14.105697 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:43:14.105716 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:43:14.105734 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Dec 12 18:43:14.105753 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Dec 12 18:43:14.105772 kernel: e820: reserve RAM buffer [mem 0xbd319000-0xbfffffff]
Dec 12 18:43:14.105790 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Dec 12 18:43:14.105811 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Dec 12 18:43:14.105830 kernel: vgaarb: loaded
Dec 12 18:43:14.105849 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:43:14.105868 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:43:14.105886 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:43:14.105905 kernel: pnp: PnP ACPI init
Dec 12 18:43:14.105923 kernel: pnp: PnP ACPI: found 7 devices
Dec 12 18:43:14.105941 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:43:14.105960 kernel: NET: Registered PF_INET protocol family
Dec 12 18:43:14.105983 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 12 18:43:14.106001 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 12 18:43:14.106020 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:43:14.106039 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:43:14.106057 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 12 18:43:14.106075 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 12 18:43:14.106094 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 12 18:43:14.106112 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 12 18:43:14.106128 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:43:14.106150 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:43:14.106340 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:43:14.106512 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:43:14.107827 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:43:14.108009 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Dec 12 18:43:14.108213 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 12 18:43:14.108239 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:43:14.108264 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 12 18:43:14.108284 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Dec 12 18:43:14.108304 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 12 18:43:14.108323 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 12 18:43:14.108342 kernel: clocksource: Switched to clocksource tsc
Dec 12 18:43:14.108361 kernel: Initialise system trusted keyrings
Dec 12 18:43:14.108379 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 12 18:43:14.108398 kernel: Key type asymmetric registered
Dec 12 18:43:14.108416 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:43:14.108439 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:43:14.108458 kernel: io scheduler mq-deadline registered
Dec 12 18:43:14.108476 kernel: io scheduler kyber registered
Dec 12 18:43:14.108495 kernel: io scheduler bfq registered
Dec 12 18:43:14.108514 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:43:14.108534 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 12 18:43:14.110773 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Dec 12 18:43:14.110805 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Dec 12 18:43:14.111010 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Dec 12 18:43:14.111042 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 12 18:43:14.111254 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Dec 12 18:43:14.111282 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:43:14.111302 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:43:14.111321 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 12 18:43:14.111340 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Dec 12 18:43:14.111359 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Dec 12 18:43:14.111568 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Dec 12 18:43:14.111635 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:43:14.111652 kernel: i8042: Warning: Keylock active
Dec 12 18:43:14.111669 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:43:14.111686 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:43:14.111885 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 12 18:43:14.112059 kernel: rtc_cmos 00:00: registered as rtc0
Dec 12 18:43:14.112240 kernel: rtc_cmos 00:00: setting system clock to 2025-12-12T18:43:13 UTC (1765564993)
Dec 12 18:43:14.112407 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 12 18:43:14.112435 kernel: intel_pstate: CPU model not supported
Dec 12 18:43:14.112453 kernel: pstore: Using crash dump compression: deflate
Dec 12 18:43:14.112471 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 12 18:43:14.112489 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:43:14.112507 kernel: Segment Routing with IPv6
Dec 12 18:43:14.112524 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:43:14.112542 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:43:14.112560 kernel: Key type dns_resolver registered
Dec 12 18:43:14.112577 kernel: IPI shorthand broadcast: enabled
Dec 12 18:43:14.113645 kernel: sched_clock: Marking stable (3845005015, 179381051)->(4155818144, -131432078)
Dec 12 18:43:14.113666 kernel: registered taskstats version 1
Dec 12 18:43:14.113685 kernel: Loading compiled-in X.509 certificates
Dec 12 18:43:14.113705 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:43:14.113724 kernel: Demotion targets for Node 0: null
Dec 12 18:43:14.113744 kernel: Key type .fscrypt registered
Dec 12 18:43:14.113762 kernel: Key type fscrypt-provisioning registered
Dec 12 18:43:14.113781 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:43:14.113800 kernel: ima: No architecture policies found
Dec 12 18:43:14.113822 kernel: clk: Disabling unused clocks
Dec 12 18:43:14.113841 kernel: Warning: unable to open an initial console.
Dec 12 18:43:14.113860 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:43:14.113880 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:43:14.113899 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:43:14.113918 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 12 18:43:14.113937 kernel: Run /init as init process
Dec 12 18:43:14.113957 kernel: with arguments:
Dec 12 18:43:14.113975 kernel: /init
Dec 12 18:43:14.113997 kernel: with environment:
Dec 12 18:43:14.114016 kernel: HOME=/
Dec 12 18:43:14.114035 kernel: TERM=linux
Dec 12 18:43:14.114055 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:43:14.114080 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:43:14.114100 systemd[1]: Detected virtualization google.
Dec 12 18:43:14.114120 systemd[1]: Detected architecture x86-64.
Dec 12 18:43:14.114143 systemd[1]: Running in initrd.
Dec 12 18:43:14.114171 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:43:14.114192 systemd[1]: Hostname set to .
Dec 12 18:43:14.114212 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:43:14.114232 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:43:14.114252 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:43:14.114292 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:43:14.114317 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:43:14.114339 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:43:14.114359 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:43:14.114382 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:43:14.114404 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:43:14.114427 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:43:14.114445 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:43:14.114466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:43:14.114487 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:43:14.114508 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:43:14.114529 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:43:14.114549 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:43:14.114570 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:43:14.115633 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:43:14.115665 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:43:14.115686 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:43:14.115707 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:43:14.115728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:43:14.115748 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:43:14.115769 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:43:14.115790 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:43:14.115811 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:43:14.115831 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:43:14.115859 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:43:14.115881 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:43:14.115901 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:43:14.115921 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:43:14.115942 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:43:14.115996 systemd-journald[192]: Collecting audit messages is disabled.
Dec 12 18:43:14.116043 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:43:14.116065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:43:14.116089 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:43:14.116111 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:43:14.116136 systemd-journald[192]: Journal started
Dec 12 18:43:14.116187 systemd-journald[192]: Runtime Journal (/run/log/journal/7f61199d5f7c41fda36a8968589551b7) is 8M, max 148.6M, 140.6M free.
Dec 12 18:43:14.123611 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:43:14.130444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:43:14.133249 systemd-modules-load[194]: Inserted module 'overlay'
Dec 12 18:43:14.137747 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:43:14.142781 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:43:14.169357 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:43:14.173821 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:43:14.175944 systemd-tmpfiles[207]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:43:14.197605 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:43:14.199997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:43:14.201788 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 12 18:43:14.202605 kernel: Bridge firewalling registered
Dec 12 18:43:14.203396 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:43:14.211966 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:43:14.212498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:43:14.220161 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:43:14.229649 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:43:14.247269 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:43:14.254169 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:43:14.261882 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:43:14.320742 systemd-resolved[237]: Positive Trust Anchors:
Dec 12 18:43:14.321291 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:43:14.321533 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:43:14.327735 systemd-resolved[237]: Defaulting to hostname 'linux'.
Dec 12 18:43:14.331004 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:43:14.343812 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:43:14.391626 kernel: SCSI subsystem initialized
Dec 12 18:43:14.403616 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:43:14.414615 kernel: iscsi: registered transport (tcp)
Dec 12 18:43:14.439619 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:43:14.439668 kernel: QLogic iSCSI HBA Driver
Dec 12 18:43:14.462819 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:43:14.483096 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:43:14.491863 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:43:14.557220 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:43:14.566247 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:43:14.625625 kernel: raid6: avx2x4 gen() 18336 MB/s
Dec 12 18:43:14.642620 kernel: raid6: avx2x2 gen() 17740 MB/s
Dec 12 18:43:14.660136 kernel: raid6: avx2x1 gen() 13650 MB/s
Dec 12 18:43:14.660194 kernel: raid6: using algorithm avx2x4 gen() 18336 MB/s
Dec 12 18:43:14.677965 kernel: raid6: .... xor() 8217 MB/s, rmw enabled
Dec 12 18:43:14.678007 kernel: raid6: using avx2x2 recovery algorithm
Dec 12 18:43:14.700621 kernel: xor: automatically using best checksumming function avx
Dec 12 18:43:14.885627 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:43:14.893929 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:43:14.897105 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:43:14.932879 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Dec 12 18:43:14.942527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:43:14.944038 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:43:14.973904 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation
Dec 12 18:43:15.007541 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:43:15.015573 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:43:15.105933 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:43:15.114243 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:43:15.201860 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:43:15.222610 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:43:15.245566 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Dec 12 18:43:15.255419 kernel: scsi host0: Virtio SCSI HBA
Dec 12 18:43:15.261617 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Dec 12 18:43:15.335112 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:43:15.335336 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:43:15.340212 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:43:15.346911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:43:15.355782 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 12 18:43:15.349520 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:43:15.359949 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Dec 12 18:43:15.360324 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 12 18:43:15.364739 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 12 18:43:15.365084 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 12 18:43:15.365367 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 12 18:43:15.382788 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:43:15.382834 kernel: GPT:17805311 != 33554431
Dec 12 18:43:15.382862 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:43:15.382886 kernel: GPT:17805311 != 33554431
Dec 12 18:43:15.382920 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:43:15.382946 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:43:15.385541 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 12 18:43:15.404212 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:43:15.478767 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Dec 12 18:43:15.482048 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:43:15.497026 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Dec 12 18:43:15.517215 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Dec 12 18:43:15.517473 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Dec 12 18:43:15.537324 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 12 18:43:15.540986 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:43:15.545816 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:43:15.550797 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:43:15.555936 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:43:15.563774 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:43:15.577543 disk-uuid[592]: Primary Header is updated.
Dec 12 18:43:15.577543 disk-uuid[592]: Secondary Entries is updated.
Dec 12 18:43:15.577543 disk-uuid[592]: Secondary Header is updated.
Dec 12 18:43:15.590608 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:43:15.596820 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:43:15.606606 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:43:16.624609 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:43:16.625036 disk-uuid[593]: The operation has completed successfully.
Dec 12 18:43:16.705537 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:43:16.705730 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:43:16.751469 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 18:43:16.772694 sh[614]: Success
Dec 12 18:43:16.793887 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:43:16.793966 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:43:16.795290 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:43:16.806628 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Dec 12 18:43:16.877052 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:43:16.882699 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 18:43:16.898346 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:43:16.914616 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (626)
Dec 12 18:43:16.917539 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 18:43:16.917601 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:43:16.943031 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 12 18:43:16.943079 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:43:16.943104 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:43:16.947678 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 18:43:16.951234 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:43:16.954116 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:43:16.956144 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:43:16.965842 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:43:17.011973 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (659)
Dec 12 18:43:17.012031 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:43:17.012058 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:43:17.021726 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:43:17.021775 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:43:17.021808 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:43:17.027630 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:43:17.030850 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:43:17.039004 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:43:17.139809 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:43:17.144332 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:43:17.276551 systemd-networkd[795]: lo: Link UP
Dec 12 18:43:17.276565 systemd-networkd[795]: lo: Gained carrier
Dec 12 18:43:17.278818 systemd-networkd[795]: Enumeration completed
Dec 12 18:43:17.278955 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:43:17.283038 systemd[1]: Reached target network.target - Network.
Dec 12 18:43:17.284255 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:43:17.284262 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:43:17.285861 systemd-networkd[795]: eth0: Link UP
Dec 12 18:43:17.307478 ignition[718]: Ignition 2.22.0
Dec 12 18:43:17.286520 systemd-networkd[795]: eth0: Gained carrier
Dec 12 18:43:17.307492 ignition[718]: Stage: fetch-offline
Dec 12 18:43:17.286536 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:43:17.307546 ignition[718]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:43:17.302811 systemd-networkd[795]: eth0: DHCPv4 address 10.128.0.44/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 12 18:43:17.307565 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 12 18:43:17.311968 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:43:17.307733 ignition[718]: parsed url from cmdline: ""
Dec 12 18:43:17.315451 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 18:43:17.307742 ignition[718]: no config URL provided
Dec 12 18:43:17.307753 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:43:17.307920 ignition[718]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:43:17.307934 ignition[718]: failed to fetch config: resource requires networking
Dec 12 18:43:17.309299 ignition[718]: Ignition finished successfully
Dec 12 18:43:17.355776 ignition[804]: Ignition 2.22.0
Dec 12 18:43:17.355792 ignition[804]: Stage: fetch
Dec 12 18:43:17.356003 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:43:17.356019 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 12 18:43:17.363736 unknown[804]: fetched base config from "system"
Dec 12 18:43:17.356204 ignition[804]: parsed url from cmdline: ""
Dec 12 18:43:17.363744 unknown[804]: fetched base config from "system"
Dec 12 18:43:17.356211 ignition[804]: no config URL provided
Dec 12 18:43:17.363750 unknown[804]: fetched user config from "gcp"
Dec 12 18:43:17.356221 ignition[804]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:43:17.366628 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 18:43:17.356235 ignition[804]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:43:17.370005 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:43:17.356273 ignition[804]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 12 18:43:17.359214 ignition[804]: GET result: OK
Dec 12 18:43:17.359331 ignition[804]: parsing config with SHA512: 2fd80963c3b23ee9e9ff257f5a4044e4d1ff8c2602212be2461ceba41458076d9301e441b13e70c5bfb86bd37d8c1a05ad16ad3f10bfe7cb391ad9ed9d01b528
Dec 12 18:43:17.364064 ignition[804]: fetch: fetch complete
Dec 12 18:43:17.364070 ignition[804]: fetch: fetch passed
Dec 12 18:43:17.364116 ignition[804]: Ignition finished successfully
Dec 12 18:43:17.420719 ignition[810]: Ignition 2.22.0
Dec 12 18:43:17.420735 ignition[810]: Stage: kargs
Dec 12 18:43:17.424128 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:43:17.420974 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:43:17.428997 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:43:17.420991 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 12 18:43:17.422204 ignition[810]: kargs: kargs passed
Dec 12 18:43:17.422254 ignition[810]: Ignition finished successfully
Dec 12 18:43:17.470516 ignition[817]: Ignition 2.22.0
Dec 12 18:43:17.470534 ignition[817]: Stage: disks
Dec 12 18:43:17.473375 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:43:17.470768 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:43:17.478351 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:43:17.470786 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 12 18:43:17.479868 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:43:17.472054 ignition[817]: disks: disks passed
Dec 12 18:43:17.483872 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:43:17.472122 ignition[817]: Ignition finished successfully
Dec 12 18:43:17.487839 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:43:17.491845 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:43:17.498244 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:43:17.539232 systemd-fsck[826]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Dec 12 18:43:17.549012 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:43:17.555078 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:43:17.723758 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 12 18:43:17.723657 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:43:17.727693 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:43:17.730847 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:43:17.747695 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:43:17.749629 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 18:43:17.749856 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:43:17.749910 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:43:17.765771 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (834)
Dec 12 18:43:17.765799 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:43:17.765814 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:43:17.771481 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:43:17.775303 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:43:17.775353 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:43:17.775378 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:43:17.777833 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:43:17.785486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:43:17.889896 initrd-setup-root[858]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 18:43:17.897509 initrd-setup-root[865]: cut: /sysroot/etc/group: No such file or directory
Dec 12 18:43:17.904078 initrd-setup-root[872]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 18:43:17.910016 initrd-setup-root[879]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 18:43:18.045155 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 18:43:18.051193 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 18:43:18.055685 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:43:18.071523 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:43:18.076717 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:43:18.112307 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:43:18.118340 ignition[946]: INFO : Ignition 2.22.0
Dec 12 18:43:18.118340 ignition[946]: INFO : Stage: mount
Dec 12 18:43:18.122702 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:43:18.122702 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 12 18:43:18.122702 ignition[946]: INFO : mount: mount passed
Dec 12 18:43:18.122702 ignition[946]: INFO : Ignition finished successfully
Dec 12 18:43:18.122802 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:43:18.128811 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:43:18.153046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:43:18.179658 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (959)
Dec 12 18:43:18.182284 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:43:18.182328 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:43:18.188367 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:43:18.188415 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:43:18.188440 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:43:18.191687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:43:18.229187 ignition[975]: INFO : Ignition 2.22.0
Dec 12 18:43:18.229187 ignition[975]: INFO : Stage: files
Dec 12 18:43:18.234706 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:43:18.234706 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 12 18:43:18.234706 ignition[975]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 18:43:18.234706 ignition[975]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 18:43:18.234706 ignition[975]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 18:43:18.250654 ignition[975]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 18:43:18.250654 ignition[975]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 18:43:18.250654 ignition[975]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 18:43:18.250654 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:43:18.250654 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 12 18:43:18.237823 unknown[975]: wrote ssh authorized keys file for user: core
Dec 12 18:43:18.358420 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 18:43:18.503434 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:43:18.507771 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:43:18.507771 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 18:43:18.507771 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:43:18.507771 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:43:18.507771 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:43:18.507771 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:43:18.507771 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:43:18.507771 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:43:18.507771 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:43:18.544674 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:43:18.544674 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 12 18:43:18.544674 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 12 18:43:18.544674 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 12 18:43:18.544674 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Dec 12 18:43:18.555992 systemd-networkd[795]: eth0: Gained IPv6LL
Dec 12 18:43:18.842831 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 18:43:19.364854 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 12 18:43:19.364854 ignition[975]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 18:43:19.372740 ignition[975]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:43:19.372740 ignition[975]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:43:19.372740 ignition[975]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 18:43:19.372740 ignition[975]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 18:43:19.372740 ignition[975]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 18:43:19.372740 ignition[975]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:43:19.372740 ignition[975]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:43:19.372740 ignition[975]: INFO : files: files passed
Dec 12 18:43:19.372740 ignition[975]: INFO : Ignition finished successfully
Dec 12 18:43:19.374343 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 18:43:19.380532 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 18:43:19.387700 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 18:43:19.429848 initrd-setup-root-after-ignition[1005]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:43:19.429848 initrd-setup-root-after-ignition[1005]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:43:19.403853 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:43:19.448728 initrd-setup-root-after-ignition[1009]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:43:19.404022 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:43:19.422325 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:43:19.427398 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:43:19.435325 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:43:19.503961 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:43:19.504117 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 18:43:19.509314 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 18:43:19.511899 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:43:19.517057 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:43:19.519122 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:43:19.548462 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:43:19.551307 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:43:19.579446 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:43:19.582849 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:43:19.588926 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:43:19.592145 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:43:19.592721 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:43:19.601845 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:43:19.604991 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:43:19.608146 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:43:19.612106 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:43:19.616137 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:43:19.620102 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:43:19.624107 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:43:19.628125 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:43:19.632131 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:43:19.637135 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:43:19.641115 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:43:19.645029 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:43:19.645339 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:43:19.655731 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:43:19.656215 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:43:19.661044 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:43:19.661340 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:43:19.665062 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:43:19.665469 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:43:19.674146 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:43:19.674669 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:43:19.677320 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:43:19.677767 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:43:19.683475 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:43:19.690700 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 18:43:19.691406 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:43:19.701481 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:43:19.712747 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:43:19.712972 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:43:19.716904 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:43:19.717104 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:43:19.735977 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:43:19.736133 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:43:19.744507 ignition[1030]: INFO : Ignition 2.22.0
Dec 12 18:43:19.744507 ignition[1030]: INFO : Stage: umount
Dec 12 18:43:19.744507 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:43:19.744507 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 12 18:43:19.744507 ignition[1030]: INFO : umount: umount passed
Dec 12 18:43:19.744507 ignition[1030]: INFO : Ignition finished successfully
Dec 12 18:43:19.743806 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:43:19.745162 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:43:19.745461 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:43:19.748599 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:43:19.748993 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:43:19.756256 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:43:19.756370 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:43:19.758947 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:43:19.759028 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:43:19.765789 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 18:43:19.765864 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 18:43:19.770738 systemd[1]: Stopped target network.target - Network.
Dec 12 18:43:19.774678 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:43:19.774762 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:43:19.778715 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:43:19.782657 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:43:19.782731 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:43:19.786665 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:43:19.790678 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:43:19.794721 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:43:19.794804 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:43:19.797912 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:43:19.798092 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:43:19.801916 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:43:19.802093 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:43:19.805976 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:43:19.806164 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:43:19.809896 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:43:19.810100 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:43:19.814452 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:43:19.818224 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:43:19.821898 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:43:19.822025 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:43:19.827523 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 18:43:19.827972 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:43:19.828130 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:43:19.834827 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 18:43:19.835470 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:43:19.839756 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:43:19.839820 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:43:19.845070 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:43:19.855671 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:43:19.855762 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:43:19.858744 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:43:19.858815 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:43:19.862917 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:43:19.863110 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:43:19.864156 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:43:19.864318 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:43:19.875125 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:43:19.882574 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 18:43:19.884490 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:43:19.890399 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:43:19.891625 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:43:19.898246 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:43:19.898316 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:43:19.905891 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:43:19.905959 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:43:19.908884 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:43:19.909073 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:43:19.917853 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:43:19.918056 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:43:19.929669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:43:19.929921 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:43:19.938394 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:43:19.947663 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 18:43:19.947748 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:43:19.950970 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 18:43:19.951141 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:43:19.960863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:43:19.961015 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:43:19.968051 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 12 18:43:19.968109 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 12 18:43:19.968152 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:43:19.968640 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 18:43:19.968798 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 18:43:19.973125 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 18:43:20.051717 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Dec 12 18:43:19.973257 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 18:43:19.980971 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 18:43:19.984977 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 18:43:20.014419 systemd[1]: Switching root.
Dec 12 18:43:20.061660 systemd-journald[192]: Journal stopped
Dec 12 18:43:22.076548 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 18:43:22.079312 kernel: SELinux: policy capability open_perms=1
Dec 12 18:43:22.079354 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 18:43:22.080068 kernel: SELinux: policy capability always_check_network=0
Dec 12 18:43:22.080097 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 18:43:22.080119 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 18:43:22.080144 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 18:43:22.080167 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 18:43:22.080196 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 18:43:22.080219 kernel: audit: type=1403 audit(1765565000.639:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 18:43:22.080483 systemd[1]: Successfully loaded SELinux policy in 69.577ms.
Dec 12 18:43:22.080515 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.175ms.
Dec 12 18:43:22.080543 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:43:22.080568 systemd[1]: Detected virtualization google.
Dec 12 18:43:22.081471 systemd[1]: Detected architecture x86-64.
Dec 12 18:43:22.081509 systemd[1]: Detected first boot.
Dec 12 18:43:22.081536 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:43:22.081562 zram_generator::config[1075]: No configuration found.
Dec 12 18:43:22.081658 kernel: Guest personality initialized and is inactive
Dec 12 18:43:22.081684 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 12 18:43:22.081717 kernel: Initialized host personality
Dec 12 18:43:22.081740 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 18:43:22.081766 systemd[1]: Populated /etc with preset unit settings.
Dec 12 18:43:22.081794 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 18:43:22.081820 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 18:43:22.081845 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 18:43:22.081870 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:43:22.081902 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 18:43:22.081928 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 18:43:22.081954 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 18:43:22.081980 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 18:43:22.082006 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 18:43:22.082031 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 18:43:22.082066 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 18:43:22.082097 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 18:43:22.085656 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:43:22.085691 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:43:22.085718 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 18:43:22.085744 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 18:43:22.085770 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 18:43:22.086508 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:43:22.086712 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 12 18:43:22.086742 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:43:22.086776 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:43:22.086801 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 18:43:22.086826 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 18:43:22.086851 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:43:22.086877 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 18:43:22.086903 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:43:22.086928 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:43:22.086960 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:43:22.086985 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:43:22.087011 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 18:43:22.087037 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 18:43:22.087063 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 18:43:22.087088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:43:22.092630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:43:22.095028 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:43:22.095064 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 18:43:22.095092 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 18:43:22.095118 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 18:43:22.095144 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 18:43:22.095171 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:22.095206 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 18:43:22.095243 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 18:43:22.095270 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 18:43:22.095298 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 18:43:22.095324 systemd[1]: Reached target machines.target - Containers.
Dec 12 18:43:22.095349 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 18:43:22.095376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:43:22.095402 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:43:22.097842 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 18:43:22.097882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:43:22.097918 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:43:22.097945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:43:22.097971 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 18:43:22.097998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:43:22.098024 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 18:43:22.098050 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 18:43:22.098077 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 18:43:22.098112 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 18:43:22.098138 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 18:43:22.098166 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:43:22.098193 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:43:22.098219 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:43:22.098245 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:43:22.098272 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 18:43:22.098298 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 18:43:22.098330 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:43:22.098356 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 18:43:22.098383 systemd[1]: Stopped verity-setup.service.
Dec 12 18:43:22.098417 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:22.098443 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 18:43:22.098468 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 18:43:22.098494 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 18:43:22.098519 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 18:43:22.098552 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 18:43:22.098578 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 18:43:22.101737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:43:22.101765 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 18:43:22.101792 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 18:43:22.101818 kernel: ACPI: bus type drm_connector registered
Dec 12 18:43:22.101843 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:43:22.101870 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:43:22.101897 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:43:22.101932 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:43:22.101958 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:43:22.101983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:43:22.102010 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:43:22.102035 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 18:43:22.102060 kernel: fuse: init (API version 7.41)
Dec 12 18:43:22.102084 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 18:43:22.103662 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 18:43:22.103706 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 18:43:22.103743 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 18:43:22.103771 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 18:43:22.103799 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:43:22.103827 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 18:43:22.103863 kernel: loop: module loaded
Dec 12 18:43:22.103894 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:43:22.104701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:43:22.104747 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 18:43:22.104776 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:43:22.104804 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 18:43:22.104885 systemd-journald[1149]: Collecting audit messages is disabled.
Dec 12 18:43:22.104942 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:43:22.104970 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 18:43:22.104997 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 18:43:22.105025 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:43:22.105052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:43:22.105079 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:43:22.105106 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 18:43:22.105138 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 18:43:22.106321 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 18:43:22.106360 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:43:22.106389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:43:22.106416 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 18:43:22.106453 systemd-journald[1149]: Journal started
Dec 12 18:43:22.106506 systemd-journald[1149]: Runtime Journal (/run/log/journal/08e5d68e59cd4010ad62eaf5eec64684) is 8M, max 148.6M, 140.6M free.
Dec 12 18:43:22.110159 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:43:21.481817 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 18:43:21.501293 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 12 18:43:21.501883 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 18:43:22.121772 kernel: loop0: detected capacity change from 0 to 219144
Dec 12 18:43:22.123327 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 18:43:22.133697 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 18:43:22.186237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:43:22.196771 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 18:43:22.203865 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 18:43:22.245006 systemd-journald[1149]: Time spent on flushing to /var/log/journal/08e5d68e59cd4010ad62eaf5eec64684 is 94.445ms for 964 entries.
Dec 12 18:43:22.245006 systemd-journald[1149]: System Journal (/var/log/journal/08e5d68e59cd4010ad62eaf5eec64684) is 8M, max 584.8M, 576.8M free.
Dec 12 18:43:22.352942 systemd-journald[1149]: Received client request to flush runtime journal.
Dec 12 18:43:22.353022 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:43:22.353052 kernel: loop1: detected capacity change from 0 to 50736
Dec 12 18:43:22.277561 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 18:43:22.335681 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 18:43:22.340671 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:43:22.356644 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 18:43:22.369800 kernel: loop2: detected capacity change from 0 to 128560
Dec 12 18:43:22.430399 kernel: loop3: detected capacity change from 0 to 110984
Dec 12 18:43:22.464348 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Dec 12 18:43:22.465932 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Dec 12 18:43:22.475489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:43:22.512288 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 18:43:22.514183 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:43:22.516646 kernel: loop4: detected capacity change from 0 to 219144
Dec 12 18:43:22.552631 kernel: loop5: detected capacity change from 0 to 50736
Dec 12 18:43:22.578637 kernel: loop6: detected capacity change from 0 to 128560
Dec 12 18:43:22.617633 kernel: loop7: detected capacity change from 0 to 110984
Dec 12 18:43:22.646043 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Dec 12 18:43:22.649869 (sd-merge)[1222]: Merged extensions into '/usr'.
Dec 12 18:43:22.661099 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 18:43:22.661237 systemd[1]: Reloading...
Dec 12 18:43:22.844843 zram_generator::config[1246]: No configuration found.
Dec 12 18:43:23.209611 ldconfig[1171]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:43:23.385418 systemd[1]: Reloading finished in 722 ms.
Dec 12 18:43:23.414613 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 18:43:23.419318 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:43:23.435344 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:43:23.442809 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:43:23.485704 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:43:23.485877 systemd[1]: Reloading...
Dec 12 18:43:23.493949 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:43:23.495035 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:43:23.495675 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 18:43:23.496315 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 18:43:23.498233 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 18:43:23.499054 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Dec 12 18:43:23.499875 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Dec 12 18:43:23.510862 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:43:23.510990 systemd-tmpfiles[1289]: Skipping /boot
Dec 12 18:43:23.528665 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:43:23.528790 systemd-tmpfiles[1289]: Skipping /boot
Dec 12 18:43:23.593628 zram_generator::config[1313]: No configuration found.
Dec 12 18:43:23.830935 systemd[1]: Reloading finished in 344 ms.
Dec 12 18:43:23.856038 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 18:43:23.876232 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:43:23.888903 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:43:23.895296 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 18:43:23.901762 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 18:43:23.910500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:43:23.919711 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:43:23.926628 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 18:43:23.940033 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:23.940372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:43:23.944032 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:43:23.949896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:43:23.960706 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:43:23.963807 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:43:23.964172 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:43:23.968948 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 18:43:23.970242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:23.981017 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:43:23.981470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:43:23.982234 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:43:23.982803 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:43:23.982973 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:43:23.985055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:43:23.992080 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:43:23.998525 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:43:24.000986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:43:24.005515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:43:24.006697 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:43:24.014868 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:43:24.015129 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:43:24.024032 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:43:24.025764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:43:24.031694 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:43:24.043882 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:43:24.049751 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Dec 12 18:43:24.062235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:43:24.068181 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 12 18:43:24.071052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:43:24.071411 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:43:24.071928 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 18:43:24.074952 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:43:24.080398 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:43:24.081529 systemd-udevd[1363]: Using default interface naming scheme 'v255'. Dec 12 18:43:24.100117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:43:24.100660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:43:24.104969 systemd[1]: Finished ensure-sysext.service. Dec 12 18:43:24.124488 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:43:24.128696 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:43:24.138012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:43:24.139687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:43:24.143276 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:43:24.144039 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:43:24.148295 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 12 18:43:24.148661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:43:24.151503 augenrules[1400]: No rules Dec 12 18:43:24.162331 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:43:24.163779 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:43:24.176708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:43:24.176947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:43:24.177199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:43:24.181741 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 12 18:43:24.188394 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Dec 12 18:43:24.196174 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:43:24.198885 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:43:24.206031 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:43:24.209769 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:43:24.218418 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:43:24.288926 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Dec 12 18:43:24.422437 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Dec 12 18:43:24.422491 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Dec 12 18:43:24.501479 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Dec 12 18:43:24.510694 systemd-networkd[1422]: lo: Link UP Dec 12 18:43:24.510709 systemd-networkd[1422]: lo: Gained carrier Dec 12 18:43:24.517190 systemd-networkd[1422]: Enumeration completed Dec 12 18:43:24.517316 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:43:24.523847 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 18:43:24.525836 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:43:24.525949 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:43:24.529217 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:43:24.531783 systemd-networkd[1422]: eth0: Link UP Dec 12 18:43:24.532178 systemd-networkd[1422]: eth0: Gained carrier Dec 12 18:43:24.532206 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:43:24.532919 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 18:43:24.544663 systemd-networkd[1422]: eth0: DHCPv4 address 10.128.0.44/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 12 18:43:24.574727 systemd-resolved[1361]: Positive Trust Anchors: Dec 12 18:43:24.574746 systemd-resolved[1361]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:43:24.574824 systemd-resolved[1361]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:43:24.596914 systemd-resolved[1361]: Defaulting to hostname 'linux'. Dec 12 18:43:24.602111 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:43:24.604818 systemd[1]: Reached target network.target - Network. Dec 12 18:43:24.608139 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:43:24.611723 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:43:24.614816 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 18:43:24.618717 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 18:43:24.621701 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:43:24.624933 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:43:24.627849 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:43:24.630724 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:43:24.633841 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Dec 12 18:43:24.633891 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:43:24.636747 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:43:24.643197 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:43:24.649771 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:43:24.659426 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:43:24.662984 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:43:24.666819 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:43:24.677814 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 18:43:24.681997 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:43:24.681913 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:43:24.687858 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:43:24.690026 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:43:24.695015 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:43:24.698028 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:43:24.700846 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:43:24.700906 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:43:24.704993 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:43:24.711195 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 18:43:24.717236 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Dec 12 18:43:24.724435 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:43:24.731337 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:43:24.737902 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:43:24.740714 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:43:24.744716 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:43:24.753411 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:43:24.762426 systemd[1]: Started ntpd.service - Network Time Service. Dec 12 18:43:24.775684 jq[1478]: false Dec 12 18:43:24.774818 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 18:43:24.786877 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 18:43:24.817692 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 12 18:43:24.818075 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 12 18:43:24.815134 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:43:24.821296 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:43:24.821355 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Dec 12 18:43:24.821386 kernel: ACPI: button: Sleep Button [SLPF] Dec 12 18:43:24.864616 google_oslogin_nss_cache[1480]: oslogin_cache_refresh[1480]: Refreshing passwd entry cache Dec 12 18:43:24.875641 oslogin_cache_refresh[1480]: Refreshing passwd entry cache Dec 12 18:43:24.879895 systemd[1]: Starting systemd-logind.service - User Login Management... 
Dec 12 18:43:24.885122 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 12 18:43:24.886936 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:43:24.890219 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:43:24.896838 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 18:43:24.912513 google_oslogin_nss_cache[1480]: oslogin_cache_refresh[1480]: Failure getting users, quitting Dec 12 18:43:24.912513 google_oslogin_nss_cache[1480]: oslogin_cache_refresh[1480]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:43:24.912513 google_oslogin_nss_cache[1480]: oslogin_cache_refresh[1480]: Refreshing group entry cache Dec 12 18:43:24.909845 oslogin_cache_refresh[1480]: Failure getting users, quitting Dec 12 18:43:24.909869 oslogin_cache_refresh[1480]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:43:24.909924 oslogin_cache_refresh[1480]: Refreshing group entry cache Dec 12 18:43:24.916266 google_oslogin_nss_cache[1480]: oslogin_cache_refresh[1480]: Failure getting groups, quitting Dec 12 18:43:24.916266 google_oslogin_nss_cache[1480]: oslogin_cache_refresh[1480]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:43:24.913083 oslogin_cache_refresh[1480]: Failure getting groups, quitting Dec 12 18:43:24.913102 oslogin_cache_refresh[1480]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:43:24.924267 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:43:24.928300 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Dec 12 18:43:24.929906 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 18:43:24.930479 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:43:24.931630 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:43:24.939080 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:43:24.939441 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 18:43:24.962675 coreos-metadata[1475]: Dec 12 18:43:24.953 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 12 18:43:24.971341 coreos-metadata[1475]: Dec 12 18:43:24.970 INFO Fetch successful Dec 12 18:43:24.971341 coreos-metadata[1475]: Dec 12 18:43:24.970 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Dec 12 18:43:24.980900 jq[1495]: true Dec 12 18:43:24.990628 coreos-metadata[1475]: Dec 12 18:43:24.981 INFO Fetch successful Dec 12 18:43:24.990628 coreos-metadata[1475]: Dec 12 18:43:24.981 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 12 18:43:24.990628 coreos-metadata[1475]: Dec 12 18:43:24.981 INFO Fetch successful Dec 12 18:43:24.990628 coreos-metadata[1475]: Dec 12 18:43:24.981 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 12 18:43:24.990628 coreos-metadata[1475]: Dec 12 18:43:24.982 INFO Fetch successful Dec 12 18:43:25.035059 kernel: EDAC MC: Ver: 3.0.0 Dec 12 18:43:25.034994 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 18:43:25.065982 systemd[1]: motdgen.service: Deactivated successfully. 
Dec 12 18:43:25.070489 extend-filesystems[1479]: Found /dev/sda6 Dec 12 18:43:25.066316 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:43:25.100617 jq[1515]: true Dec 12 18:43:25.119096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:43:25.122614 extend-filesystems[1479]: Found /dev/sda9 Dec 12 18:43:25.151622 extend-filesystems[1479]: Checking size of /dev/sda9 Dec 12 18:43:25.164144 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 12 18:43:25.172208 update_engine[1493]: I20251212 18:43:25.172109 1493 main.cc:92] Flatcar Update Engine starting Dec 12 18:43:25.181889 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:43:25.186690 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 18:43:25.197713 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:43:25.201524 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:43:25.201524 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:43:25.201524 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: ---------------------------------------------------- Dec 12 18:43:25.201524 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:43:25.201524 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:43:25.201524 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: corporation. 
Support and training for ntp-4 are Dec 12 18:43:25.201524 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: available at https://www.nwtime.org/support Dec 12 18:43:25.201524 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: ---------------------------------------------------- Dec 12 18:43:25.198255 ntpd[1482]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:43:25.198328 ntpd[1482]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:43:25.198344 ntpd[1482]: ---------------------------------------------------- Dec 12 18:43:25.198358 ntpd[1482]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:43:25.198371 ntpd[1482]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:43:25.198384 ntpd[1482]: corporation. Support and training for ntp-4 are Dec 12 18:43:25.198399 ntpd[1482]: available at https://www.nwtime.org/support Dec 12 18:43:25.198424 ntpd[1482]: ---------------------------------------------------- Dec 12 18:43:25.214203 ntpd[1482]: proto: precision = 0.110 usec (-23) Dec 12 18:43:25.218727 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: proto: precision = 0.110 usec (-23) Dec 12 18:43:25.235120 kernel: ntpd[1482]: segfault at 24 ip 00005581333a2aeb sp 00007fffcdac2600 error 4 in ntpd[68aeb,558133340000+80000] likely on CPU 0 (core 0, socket 0) Dec 12 18:43:25.235183 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Dec 12 18:43:25.235215 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: basedate set to 2025-11-30 Dec 12 18:43:25.235215 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: gps base set to 2025-11-30 (week 2395) Dec 12 18:43:25.235215 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:43:25.235215 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:43:25.235215 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: Listen 
normally on 2 lo 127.0.0.1:123 Dec 12 18:43:25.235215 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: Listen normally on 3 eth0 10.128.0.44:123 Dec 12 18:43:25.235215 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: Listen normally on 4 lo [::1]:123 Dec 12 18:43:25.235215 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: bind(21) AF_INET6 [fe80::4001:aff:fe80:2c%2]:123 flags 0x811 failed: Cannot assign requested address Dec 12 18:43:25.235215 ntpd[1482]: 12 Dec 18:43:25 ntpd[1482]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:2c%2]:123 Dec 12 18:43:25.225813 ntpd[1482]: basedate set to 2025-11-30 Dec 12 18:43:25.235695 tar[1504]: linux-amd64/LICENSE Dec 12 18:43:25.235695 tar[1504]: linux-amd64/helm Dec 12 18:43:25.225842 ntpd[1482]: gps base set to 2025-11-30 (week 2395) Dec 12 18:43:25.226003 ntpd[1482]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:43:25.226042 ntpd[1482]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:43:25.226270 ntpd[1482]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:43:25.226305 ntpd[1482]: Listen normally on 3 eth0 10.128.0.44:123 Dec 12 18:43:25.226344 ntpd[1482]: Listen normally on 4 lo [::1]:123 Dec 12 18:43:25.226382 ntpd[1482]: bind(21) AF_INET6 [fe80::4001:aff:fe80:2c%2]:123 flags 0x811 failed: Cannot assign requested address Dec 12 18:43:25.226420 ntpd[1482]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:2c%2]:123 Dec 12 18:43:25.280937 systemd-coredump[1552]: Process 1482 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 12 18:43:25.284068 extend-filesystems[1479]: Resized partition /dev/sda9 Dec 12 18:43:25.285537 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Dec 12 18:43:25.295771 systemd[1]: Started systemd-coredump@0-1552-0.service - Process Core Dump (PID 1552/UID 0). 
Dec 12 18:43:25.303742 extend-filesystems[1554]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:43:25.328076 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Dec 12 18:43:25.396896 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 18:43:25.475487 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Dec 12 18:43:25.493417 dbus-daemon[1476]: [system] SELinux support is enabled Dec 12 18:43:25.514230 extend-filesystems[1554]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 12 18:43:25.514230 extend-filesystems[1554]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 12 18:43:25.514230 extend-filesystems[1554]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Dec 12 18:43:25.493667 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 18:43:25.523146 bash[1565]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:43:25.523312 extend-filesystems[1479]: Resized filesystem in /dev/sda9 Dec 12 18:43:25.528105 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:43:25.529101 dbus-daemon[1476]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1422 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 18:43:25.528497 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:43:25.539954 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 12 18:43:25.549032 update_engine[1493]: I20251212 18:43:25.548802 1493 update_check_scheduler.cc:74] Next update check in 3m29s Dec 12 18:43:25.665149 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:43:25.676829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 12 18:43:25.757642 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:43:25.772390 systemd[1]: Starting sshkeys.service... Dec 12 18:43:25.777816 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:43:25.777863 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:43:25.792745 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 12 18:43:25.801760 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 18:43:25.801801 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:43:25.815837 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:43:25.870963 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 18:43:25.884675 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 18:43:26.077102 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:43:26.084035 containerd[1509]: time="2025-12-12T18:43:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:43:26.085429 systemd-logind[1490]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 12 18:43:26.085472 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:43:26.085977 systemd-logind[1490]: New seat seat0. 
Dec 12 18:43:26.089839 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:43:26.107091 coreos-metadata[1585]: Dec 12 18:43:26.106 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 12 18:43:26.110670 coreos-metadata[1585]: Dec 12 18:43:26.110 INFO Fetch failed with 404: resource not found Dec 12 18:43:26.110670 coreos-metadata[1585]: Dec 12 18:43:26.110 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 12 18:43:26.110670 coreos-metadata[1585]: Dec 12 18:43:26.110 INFO Fetch successful Dec 12 18:43:26.110670 coreos-metadata[1585]: Dec 12 18:43:26.110 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 12 18:43:26.111623 coreos-metadata[1585]: Dec 12 18:43:26.111 INFO Fetch failed with 404: resource not found Dec 12 18:43:26.111623 coreos-metadata[1585]: Dec 12 18:43:26.111 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 12 18:43:26.114447 coreos-metadata[1585]: Dec 12 18:43:26.114 INFO Fetch failed with 404: resource not found Dec 12 18:43:26.114447 coreos-metadata[1585]: Dec 12 18:43:26.114 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 12 18:43:26.116639 coreos-metadata[1585]: Dec 12 18:43:26.115 INFO Fetch successful Dec 12 18:43:26.119061 unknown[1585]: wrote ssh authorized keys file for user: core Dec 12 18:43:26.125542 containerd[1509]: time="2025-12-12T18:43:26.121760403Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 18:43:26.182263 update-ssh-keys[1591]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:43:26.182071 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 18:43:26.201712 systemd[1]: Finished sshkeys.service. 
Dec 12 18:43:26.221052 containerd[1509]: time="2025-12-12T18:43:26.220892470Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.522µs" Dec 12 18:43:26.221052 containerd[1509]: time="2025-12-12T18:43:26.220937872Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:43:26.221052 containerd[1509]: time="2025-12-12T18:43:26.220968767Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:43:26.223426 containerd[1509]: time="2025-12-12T18:43:26.222177314Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:43:26.223426 containerd[1509]: time="2025-12-12T18:43:26.222222035Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:43:26.223426 containerd[1509]: time="2025-12-12T18:43:26.222267465Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:43:26.223426 containerd[1509]: time="2025-12-12T18:43:26.222394367Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:43:26.223426 containerd[1509]: time="2025-12-12T18:43:26.222416934Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:43:26.226352 containerd[1509]: time="2025-12-12T18:43:26.226289210Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:43:26.226352 containerd[1509]: time="2025-12-12T18:43:26.226346578Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:43:26.226473 containerd[1509]: time="2025-12-12T18:43:26.226374033Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:43:26.226473 containerd[1509]: time="2025-12-12T18:43:26.226389932Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:43:26.226619 containerd[1509]: time="2025-12-12T18:43:26.226542985Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:43:26.231012 containerd[1509]: time="2025-12-12T18:43:26.226918637Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:43:26.231012 containerd[1509]: time="2025-12-12T18:43:26.226984541Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:43:26.231012 containerd[1509]: time="2025-12-12T18:43:26.227005224Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:43:26.231012 containerd[1509]: time="2025-12-12T18:43:26.227052490Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:43:26.231012 containerd[1509]: time="2025-12-12T18:43:26.227429599Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:43:26.231012 containerd[1509]: time="2025-12-12T18:43:26.227530330Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:43:26.234380 containerd[1509]: time="2025-12-12T18:43:26.234303219Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Dec 12 18:43:26.234380 containerd[1509]: time="2025-12-12T18:43:26.234372158Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:43:26.234523 containerd[1509]: time="2025-12-12T18:43:26.234396054Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:43:26.234523 containerd[1509]: time="2025-12-12T18:43:26.234417806Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:43:26.234523 containerd[1509]: time="2025-12-12T18:43:26.234440023Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:43:26.234523 containerd[1509]: time="2025-12-12T18:43:26.234470273Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:43:26.234523 containerd[1509]: time="2025-12-12T18:43:26.234498156Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:43:26.234523 containerd[1509]: time="2025-12-12T18:43:26.234518640Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:43:26.234871 containerd[1509]: time="2025-12-12T18:43:26.234535624Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:43:26.234871 containerd[1509]: time="2025-12-12T18:43:26.234553622Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:43:26.234871 containerd[1509]: time="2025-12-12T18:43:26.234570514Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:43:26.234871 containerd[1509]: time="2025-12-12T18:43:26.234618169Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Dec 12 18:43:26.234871 containerd[1509]: time="2025-12-12T18:43:26.234775385Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:43:26.234871 containerd[1509]: time="2025-12-12T18:43:26.234808291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:43:26.234871 containerd[1509]: time="2025-12-12T18:43:26.234830576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:43:26.234871 containerd[1509]: time="2025-12-12T18:43:26.234858060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:43:26.235215 containerd[1509]: time="2025-12-12T18:43:26.234878023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:43:26.235215 containerd[1509]: time="2025-12-12T18:43:26.234895957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:43:26.235215 containerd[1509]: time="2025-12-12T18:43:26.234914992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:43:26.235215 containerd[1509]: time="2025-12-12T18:43:26.234943078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:43:26.235215 containerd[1509]: time="2025-12-12T18:43:26.234965032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:43:26.235215 containerd[1509]: time="2025-12-12T18:43:26.234983127Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:43:26.235215 containerd[1509]: time="2025-12-12T18:43:26.235001140Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:43:26.235215 containerd[1509]: 
time="2025-12-12T18:43:26.235085437Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:43:26.235215 containerd[1509]: time="2025-12-12T18:43:26.235105652Z" level=info msg="Start snapshots syncer" Dec 12 18:43:26.235215 containerd[1509]: time="2025-12-12T18:43:26.235167928Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:43:26.241218 containerd[1509]: time="2025-12-12T18:43:26.241021620Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:43:26.241218 containerd[1509]: time="2025-12-12T18:43:26.241114741Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:43:26.241780 containerd[1509]: time="2025-12-12T18:43:26.241203532Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:43:26.241780 containerd[1509]: time="2025-12-12T18:43:26.241366633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:43:26.241780 containerd[1509]: time="2025-12-12T18:43:26.241407434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:43:26.241780 containerd[1509]: time="2025-12-12T18:43:26.241429703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:43:26.241780 containerd[1509]: time="2025-12-12T18:43:26.241451818Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:43:26.241780 containerd[1509]: time="2025-12-12T18:43:26.241474923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:43:26.241780 containerd[1509]: time="2025-12-12T18:43:26.241496477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:43:26.241780 containerd[1509]: time="2025-12-12T18:43:26.241516569Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:43:26.241780 containerd[1509]: time="2025-12-12T18:43:26.241560739Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.243730560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.243779102Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.243838572Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.243863032Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.243879169Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.243896502Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.243911784Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.243929622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.243957452Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.243983553Z" level=info msg="runtime interface created" Dec 12 18:43:26.244914 containerd[1509]: 
time="2025-12-12T18:43:26.243994643Z" level=info msg="created NRI interface" Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.244008948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.244027620Z" level=info msg="Connect containerd service" Dec 12 18:43:26.244914 containerd[1509]: time="2025-12-12T18:43:26.244059867Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:43:26.249598 containerd[1509]: time="2025-12-12T18:43:26.247150127Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:43:26.322624 sshd_keygen[1520]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:43:26.385701 locksmithd[1584]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:43:26.412693 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:43:26.427869 systemd-networkd[1422]: eth0: Gained IPv6LL Dec 12 18:43:26.430726 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:43:26.438820 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:43:26.450852 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:43:26.469969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:43:26.482503 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:43:26.496708 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Dec 12 18:43:26.498405 systemd-coredump[1555]: Process 1482 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. 
Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1482: #0 0x00005581333a2aeb n/a (ntpd + 0x68aeb) #1 0x000055813334bcdf n/a (ntpd + 0x11cdf) #2 0x000055813334c575 n/a (ntpd + 0x12575) #3 0x0000558133347d8a n/a (ntpd + 0xdd8a) #4 0x00005581333495d3 n/a (ntpd + 0xf5d3) #5 0x0000558133351fd1 n/a (ntpd + 0x17fd1) #6 0x0000558133342c2d n/a (ntpd + 0x8c2d) #7 0x00007f704ffce16c n/a (libc.so.6 + 0x2716c) #8 0x00007f704ffce229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000558133342c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Dec 12 18:43:26.504856 systemd[1]: systemd-coredump@0-1552-0.service: Deactivated successfully. Dec 12 18:43:26.516547 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 12 18:43:26.518830 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 12 18:43:26.524470 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:43:26.528171 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:43:26.554231 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:43:26.604956 init.sh[1619]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 12 18:43:26.604956 init.sh[1619]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 12 18:43:26.607577 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:43:26.610944 init.sh[1619]: + /usr/bin/google_instance_setup Dec 12 18:43:26.622563 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Dec 12 18:43:26.629817 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:43:26.641987 systemd[1]: Started ntpd.service - Network Time Service. Dec 12 18:43:26.654650 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
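Editor's note: the containerd startup above opened with a config-migration warning ("Configuration migrated from version 2, use `containerd config migrate` to avoid migration"). Running the suggested migration once persists a current-schema file so the daemon stops migrating on every boot. A minimal sketch of what the regenerated v3 header looks like — the runtime stanza mirrors the `SystemdCgroup: true` runc options visible in the log above, but the file is illustrative, not taken from this host:

```toml
# /etc/containerd/config.toml — regenerate an old v2 file once with:
#   containerd config migrate > /etc/containerd/config.toml
# (back up the original first), then restart containerd.
version = 3

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
  runtime_type = 'io.containerd.runc.v2'
  [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
    SystemdCgroup = true
```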
Dec 12 18:43:26.665528 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:43:26.674183 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 12 18:43:26.684212 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 18:43:26.688563 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1583 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 18:43:26.699465 systemd[1]: Starting polkit.service - Authorization Manager... Dec 12 18:43:26.730050 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:43:26.783280 ntpd[1644]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:43:26.783368 ntpd[1644]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:43:26.783841 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:43:26.783841 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:43:26.783841 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: ---------------------------------------------------- Dec 12 18:43:26.783841 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:43:26.783841 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:43:26.783841 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: corporation. 
Support and training for ntp-4 are Dec 12 18:43:26.783841 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: available at https://www.nwtime.org/support Dec 12 18:43:26.783841 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: ---------------------------------------------------- Dec 12 18:43:26.783384 ntpd[1644]: ---------------------------------------------------- Dec 12 18:43:26.784869 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: proto: precision = 0.070 usec (-24) Dec 12 18:43:26.783398 ntpd[1644]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:43:26.783411 ntpd[1644]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:43:26.783424 ntpd[1644]: corporation. Support and training for ntp-4 are Dec 12 18:43:26.783437 ntpd[1644]: available at https://www.nwtime.org/support Dec 12 18:43:26.783450 ntpd[1644]: ---------------------------------------------------- Dec 12 18:43:26.784439 ntpd[1644]: proto: precision = 0.070 usec (-24) Dec 12 18:43:26.787088 containerd[1509]: time="2025-12-12T18:43:26.787027258Z" level=info msg="Start subscribing containerd event" Dec 12 18:43:26.788408 containerd[1509]: time="2025-12-12T18:43:26.787281906Z" level=info msg="Start recovering state" Dec 12 18:43:26.788721 containerd[1509]: time="2025-12-12T18:43:26.788551894Z" level=info msg="Start event monitor" Dec 12 18:43:26.788915 containerd[1509]: time="2025-12-12T18:43:26.788851952Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:43:26.788915 containerd[1509]: time="2025-12-12T18:43:26.788878539Z" level=info msg="Start streaming server" Dec 12 18:43:26.788915 containerd[1509]: time="2025-12-12T18:43:26.789771703Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:43:26.788915 containerd[1509]: time="2025-12-12T18:43:26.789843755Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:43:26.788915 containerd[1509]: time="2025-12-12T18:43:26.789862054Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 12 18:43:26.788915 containerd[1509]: time="2025-12-12T18:43:26.789865717Z" level=info msg="runtime interface starting up..." Dec 12 18:43:26.790832 containerd[1509]: time="2025-12-12T18:43:26.790279321Z" level=info msg="starting plugins..." Dec 12 18:43:26.790832 containerd[1509]: time="2025-12-12T18:43:26.790319063Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:43:26.790942 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:43:26.791548 containerd[1509]: time="2025-12-12T18:43:26.791291790Z" level=info msg="containerd successfully booted in 0.718289s" Dec 12 18:43:26.791754 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: basedate set to 2025-11-30 Dec 12 18:43:26.791754 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: gps base set to 2025-11-30 (week 2395) Dec 12 18:43:26.791360 ntpd[1644]: basedate set to 2025-11-30 Dec 12 18:43:26.791961 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:43:26.791961 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:43:26.791386 ntpd[1644]: gps base set to 2025-11-30 (week 2395) Dec 12 18:43:26.791846 ntpd[1644]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:43:26.791899 ntpd[1644]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:43:26.792172 ntpd[1644]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:43:26.792330 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:43:26.792330 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: Listen normally on 3 eth0 10.128.0.44:123 Dec 12 18:43:26.792330 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: Listen normally on 4 lo [::1]:123 Dec 12 18:43:26.792218 ntpd[1644]: Listen normally on 3 eth0 10.128.0.44:123 Dec 12 18:43:26.792267 ntpd[1644]: Listen normally on 4 lo [::1]:123 Dec 12 18:43:26.795041 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: Listen normally on 5 eth0 
[fe80::4001:aff:fe80:2c%2]:123 Dec 12 18:43:26.795041 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: Listening on routing socket on fd #22 for interface updates Dec 12 18:43:26.794578 ntpd[1644]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:2c%2]:123 Dec 12 18:43:26.794678 ntpd[1644]: Listening on routing socket on fd #22 for interface updates Dec 12 18:43:26.804645 ntpd[1644]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 18:43:26.804694 ntpd[1644]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 18:43:26.804866 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 18:43:26.804866 ntpd[1644]: 12 Dec 18:43:26 ntpd[1644]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 18:43:27.059041 polkitd[1649]: Started polkitd version 126 Dec 12 18:43:27.067019 polkitd[1649]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 18:43:27.069883 systemd[1]: Started polkit.service - Authorization Manager. Dec 12 18:43:27.067923 polkitd[1649]: Loading rules from directory /run/polkit-1/rules.d Dec 12 18:43:27.067995 polkitd[1649]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:43:27.068615 polkitd[1649]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 18:43:27.068668 polkitd[1649]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:43:27.068734 polkitd[1649]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 18:43:27.069479 polkitd[1649]: Finished loading, compiling and executing 2 rules Dec 12 18:43:27.070870 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 18:43:27.071649 polkitd[1649]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 18:43:27.105432 systemd-hostnamed[1583]: Hostname set to (transient) 
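Editor's note: containerd above reported "failed to load cni during init … no network config found in /etc/cni/net.d". On a node that will join a Kubernetes cluster this is expected at this stage — the CNI plugin installed later writes the config. For a standalone containerd host, a minimal hand-written conflist would look roughly like the sketch below (the name, bridge device, and subnet are hypothetical placeholders):

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Saved as e.g. `/etc/cni/net.d/10-containerd-net.conflist`, assuming the bridge and portmap plugins exist under the `/opt/cni/bin` directory named in the cri config above.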
Dec 12 18:43:27.106892 systemd-resolved[1361]: System hostname changed to 'ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal'. Dec 12 18:43:27.163882 tar[1504]: linux-amd64/README.md Dec 12 18:43:27.190749 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:43:27.477706 instance-setup[1637]: INFO Running google_set_multiqueue. Dec 12 18:43:27.499723 instance-setup[1637]: INFO Set channels for eth0 to 2. Dec 12 18:43:27.506399 instance-setup[1637]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 12 18:43:27.507920 instance-setup[1637]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 12 18:43:27.508384 instance-setup[1637]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 12 18:43:27.510466 instance-setup[1637]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 12 18:43:27.511149 instance-setup[1637]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 12 18:43:27.513208 instance-setup[1637]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 12 18:43:27.515365 instance-setup[1637]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Dec 12 18:43:27.517714 instance-setup[1637]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 12 18:43:27.526017 instance-setup[1637]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 12 18:43:27.530480 instance-setup[1637]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 12 18:43:27.532358 instance-setup[1637]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 12 18:43:27.532664 instance-setup[1637]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 12 18:43:27.556179 init.sh[1619]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 12 18:43:27.730017 startup-script[1695]: INFO Starting startup scripts. 
Dec 12 18:43:27.735423 startup-script[1695]: INFO No startup scripts found in metadata. Dec 12 18:43:27.735576 startup-script[1695]: INFO Finished running startup scripts. Dec 12 18:43:27.757212 init.sh[1619]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 12 18:43:27.759004 init.sh[1619]: + daemon_pids=() Dec 12 18:43:27.759004 init.sh[1619]: + for d in accounts clock_skew network Dec 12 18:43:27.759004 init.sh[1619]: + daemon_pids+=($!) Dec 12 18:43:27.759004 init.sh[1619]: + for d in accounts clock_skew network Dec 12 18:43:27.759004 init.sh[1619]: + daemon_pids+=($!) Dec 12 18:43:27.759004 init.sh[1619]: + for d in accounts clock_skew network Dec 12 18:43:27.759004 init.sh[1619]: + daemon_pids+=($!) Dec 12 18:43:27.759004 init.sh[1619]: + NOTIFY_SOCKET=/run/systemd/notify Dec 12 18:43:27.759004 init.sh[1619]: + /usr/bin/systemd-notify --ready Dec 12 18:43:27.759403 init.sh[1700]: + /usr/bin/google_network_daemon Dec 12 18:43:27.759959 init.sh[1698]: + /usr/bin/google_accounts_daemon Dec 12 18:43:27.760555 init.sh[1699]: + /usr/bin/google_clock_skew_daemon Dec 12 18:43:27.778137 systemd[1]: Started oem-gce.service - GCE Linux Agent. Dec 12 18:43:27.791376 init.sh[1619]: + wait -n 1698 1699 1700 Dec 12 18:43:28.112107 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:43:28.125315 systemd[1]: Started sshd@0-10.128.0.44:22-147.75.109.163:40724.service - OpenSSH per-connection server daemon (147.75.109.163:40724). Dec 12 18:43:28.135658 google-clock-skew[1699]: INFO Starting Google Clock Skew daemon. Dec 12 18:43:28.144206 google-clock-skew[1699]: INFO Clock drift token has changed: 0. Dec 12 18:43:28.170210 google-networking[1700]: INFO Starting Google Networking daemon. 
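Editor's note: the `init.sh` trace above shows the GCE agent's supervision pattern — start several daemons, record their PIDs, kill them all on SIGTERM, and `wait -n` so the service fails as soon as any one daemon dies. A minimal Python sketch of the same shape, with `sleep` processes standing in for the `google_*` daemons (stand-ins only, not the real agents):

```python
import os
import signal
import subprocess

# Start the "daemons" and remember their handles (init.sh: daemon_pids+=($!)).
daemons = [subprocess.Popen(["sleep", "60"]) for _ in range(3)]

# On SIGTERM, forward termination to every child (init.sh: trap '... kill ...' SIGTERM).
def stop_all(signum, frame):
    for p in daemons:
        p.terminate()

signal.signal(signal.SIGTERM, stop_all)

daemons[0].terminate()        # simulate one daemon dying
pid, status = os.wait()       # like `wait -n`: returns when the first child exits
print(f"daemon {pid} exited first")

for p in daemons:             # clean up the survivors
    p.terminate()
    p.wait()
```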
Dec 12 18:43:28.251294 groupadd[1714]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 12 18:43:28.257162 groupadd[1714]: group added to /etc/gshadow: name=google-sudoers Dec 12 18:43:28.308108 groupadd[1714]: new group: name=google-sudoers, GID=1000 Dec 12 18:43:28.328747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:43:28.340150 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:43:28.344094 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:43:28.349872 systemd[1]: Startup finished in 4.013s (kernel) + 6.842s (initrd) + 7.776s (userspace) = 18.633s. Dec 12 18:43:28.360254 google-accounts[1698]: INFO Starting Google Accounts daemon. Dec 12 18:43:28.380658 google-accounts[1698]: WARNING OS Login not installed. Dec 12 18:43:28.383679 google-accounts[1698]: INFO Creating a new user account for 0. Dec 12 18:43:28.391690 init.sh[1731]: useradd: invalid user name '0': use --badname to ignore Dec 12 18:43:28.392790 google-accounts[1698]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 12 18:43:28.504331 sshd[1709]: Accepted publickey for core from 147.75.109.163 port 40724 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:43:28.507349 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:43:28.521194 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:43:28.523737 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:43:28.544659 systemd-logind[1490]: New session 1 of user core. Dec 12 18:43:28.556501 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
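Editor's note: the `useradd: invalid user name '0'` failure above is shadow-utils rejecting a login name that starts with a digit. Its default rule is roughly "begin with a lowercase letter or underscore, then letters, digits, `-` or `_`, with an optional trailing `$` for machine accounts". A small Python check approximating that rule (an approximation of the default policy, not the exact shadow-utils source):

```python
import re

# Approximation of shadow-utils' default login-name check: first character
# must be a lowercase letter or underscore, so '0' fails — hence the
# "use --badname to ignore" error in the log above.
NAME_RE = re.compile(r"^[a-z_][a-z0-9_-]*\$?$")

def is_valid_username(name: str) -> bool:
    return bool(NAME_RE.fullmatch(name))

print(is_valid_username("core"))  # True
print(is_valid_username("0"))     # False
```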
Dec 12 18:43:28.561926 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:43:28.589567 (systemd)[1740]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:43:28.594277 systemd-logind[1490]: New session c1 of user core. Dec 12 18:43:28.842448 systemd[1740]: Queued start job for default target default.target. Dec 12 18:43:28.851097 systemd[1740]: Created slice app.slice - User Application Slice. Dec 12 18:43:28.851141 systemd[1740]: Reached target paths.target - Paths. Dec 12 18:43:28.851967 systemd[1740]: Reached target timers.target - Timers. Dec 12 18:43:28.853505 systemd[1740]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:43:28.878858 systemd[1740]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:43:28.879025 systemd[1740]: Reached target sockets.target - Sockets. Dec 12 18:43:28.879213 systemd[1740]: Reached target basic.target - Basic System. Dec 12 18:43:28.879342 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:43:28.880985 systemd[1740]: Reached target default.target - Main User Target. Dec 12 18:43:28.881049 systemd[1740]: Startup finished in 275ms. Dec 12 18:43:28.886796 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:43:29.000077 systemd-resolved[1361]: Clock change detected. Flushing caches. Dec 12 18:43:29.001720 google-clock-skew[1699]: INFO Synced system time with hardware clock. Dec 12 18:43:29.197915 systemd[1]: Started sshd@1-10.128.0.44:22-147.75.109.163:40732.service - OpenSSH per-connection server daemon (147.75.109.163:40732). 
Dec 12 18:43:29.260196 kubelet[1726]: E1212 18:43:29.260127 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:43:29.262984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:43:29.263231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:43:29.263787 systemd[1]: kubelet.service: Consumed 1.185s CPU time, 257.8M memory peak. Dec 12 18:43:29.511567 sshd[1752]: Accepted publickey for core from 147.75.109.163 port 40732 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:43:29.513463 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:43:29.521981 systemd-logind[1490]: New session 2 of user core. Dec 12 18:43:29.528606 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:43:29.728796 sshd[1756]: Connection closed by 147.75.109.163 port 40732 Dec 12 18:43:29.729701 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Dec 12 18:43:29.735907 systemd[1]: sshd@1-10.128.0.44:22-147.75.109.163:40732.service: Deactivated successfully. Dec 12 18:43:29.738575 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:43:29.739938 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:43:29.741978 systemd-logind[1490]: Removed session 2. Dec 12 18:43:29.780166 systemd[1]: Started sshd@2-10.128.0.44:22-147.75.109.163:40734.service - OpenSSH per-connection server daemon (147.75.109.163:40734). 
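Editor's note: the kubelet crash above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml … no such file or directory") is the normal state of a node that has not yet run `kubeadm init` or `kubeadm join` — kubeadm generates that file. For reference, a minimal hand-written sketch of its expected shape (values illustrative; the socket path matches the containerd endpoint logged earlier):

```yaml
# /var/lib/kubelet/config.yaml — normally written by kubeadm; shown here
# only as a sketch of the expected shape, not this node's real config.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```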
Dec 12 18:43:30.094822 sshd[1762]: Accepted publickey for core from 147.75.109.163 port 40734 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:43:30.096739 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:43:30.104470 systemd-logind[1490]: New session 3 of user core. Dec 12 18:43:30.111608 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:43:30.301976 sshd[1765]: Connection closed by 147.75.109.163 port 40734 Dec 12 18:43:30.302803 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Dec 12 18:43:30.308582 systemd[1]: sshd@2-10.128.0.44:22-147.75.109.163:40734.service: Deactivated successfully. Dec 12 18:43:30.310758 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:43:30.312009 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:43:30.314071 systemd-logind[1490]: Removed session 3. Dec 12 18:43:30.358613 systemd[1]: Started sshd@3-10.128.0.44:22-147.75.109.163:40750.service - OpenSSH per-connection server daemon (147.75.109.163:40750). Dec 12 18:43:30.661076 sshd[1772]: Accepted publickey for core from 147.75.109.163 port 40750 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:43:30.662668 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:43:30.669882 systemd-logind[1490]: New session 4 of user core. Dec 12 18:43:30.676593 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:43:30.872341 sshd[1775]: Connection closed by 147.75.109.163 port 40750 Dec 12 18:43:30.873142 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Dec 12 18:43:30.878756 systemd[1]: sshd@3-10.128.0.44:22-147.75.109.163:40750.service: Deactivated successfully. Dec 12 18:43:30.881088 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:43:30.882466 systemd-logind[1490]: Session 4 logged out. 
Waiting for processes to exit. Dec 12 18:43:30.884390 systemd-logind[1490]: Removed session 4. Dec 12 18:43:30.922991 systemd[1]: Started sshd@4-10.128.0.44:22-147.75.109.163:40756.service - OpenSSH per-connection server daemon (147.75.109.163:40756). Dec 12 18:43:31.227594 sshd[1781]: Accepted publickey for core from 147.75.109.163 port 40756 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:43:31.229238 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:43:31.234993 systemd-logind[1490]: New session 5 of user core. Dec 12 18:43:31.248586 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:43:31.418601 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:43:31.419094 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:43:31.440624 sudo[1785]: pam_unix(sudo:session): session closed for user root Dec 12 18:43:31.483293 sshd[1784]: Connection closed by 147.75.109.163 port 40756 Dec 12 18:43:31.483973 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Dec 12 18:43:31.490904 systemd[1]: sshd@4-10.128.0.44:22-147.75.109.163:40756.service: Deactivated successfully. Dec 12 18:43:31.493313 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:43:31.494640 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:43:31.496685 systemd-logind[1490]: Removed session 5. Dec 12 18:43:31.541384 systemd[1]: Started sshd@5-10.128.0.44:22-147.75.109.163:40772.service - OpenSSH per-connection server daemon (147.75.109.163:40772). 
Dec 12 18:43:31.850009 sshd[1791]: Accepted publickey for core from 147.75.109.163 port 40772 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo
Dec 12 18:43:31.851888 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:43:31.859472 systemd-logind[1490]: New session 6 of user core.
Dec 12 18:43:31.869621 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 12 18:43:32.029099 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 12 18:43:32.029604 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:43:32.041996 sudo[1796]: pam_unix(sudo:session): session closed for user root
Dec 12 18:43:32.055066 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 12 18:43:32.055539 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:43:32.068121 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:43:32.123473 augenrules[1818]: No rules
Dec 12 18:43:32.124733 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:43:32.125084 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:43:32.126291 sudo[1795]: pam_unix(sudo:session): session closed for user root
Dec 12 18:43:32.169566 sshd[1794]: Connection closed by 147.75.109.163 port 40772
Dec 12 18:43:32.170342 sshd-session[1791]: pam_unix(sshd:session): session closed for user core
Dec 12 18:43:32.176470 systemd[1]: sshd@5-10.128.0.44:22-147.75.109.163:40772.service: Deactivated successfully.
Dec 12 18:43:32.178633 systemd[1]: session-6.scope: Deactivated successfully.
Dec 12 18:43:32.180124 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit.
Dec 12 18:43:32.182073 systemd-logind[1490]: Removed session 6.
Dec 12 18:43:32.224221 systemd[1]: Started sshd@6-10.128.0.44:22-147.75.109.163:33814.service - OpenSSH per-connection server daemon (147.75.109.163:33814).
Dec 12 18:43:32.530893 sshd[1827]: Accepted publickey for core from 147.75.109.163 port 33814 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo
Dec 12 18:43:32.532749 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:43:32.540149 systemd-logind[1490]: New session 7 of user core.
Dec 12 18:43:32.545593 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 12 18:43:32.710989 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 12 18:43:32.711481 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:43:33.185272 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 12 18:43:33.203948 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 12 18:43:33.550592 dockerd[1850]: time="2025-12-12T18:43:33.550389248Z" level=info msg="Starting up"
Dec 12 18:43:33.551696 dockerd[1850]: time="2025-12-12T18:43:33.551656045Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 12 18:43:33.566372 dockerd[1850]: time="2025-12-12T18:43:33.566326134Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 12 18:43:33.709367 dockerd[1850]: time="2025-12-12T18:43:33.709318878Z" level=info msg="Loading containers: start."
Dec 12 18:43:33.732430 kernel: Initializing XFRM netlink socket
Dec 12 18:43:34.061446 systemd-networkd[1422]: docker0: Link UP
Dec 12 18:43:34.066911 dockerd[1850]: time="2025-12-12T18:43:34.066858101Z" level=info msg="Loading containers: done."
Dec 12 18:43:34.085108 dockerd[1850]: time="2025-12-12T18:43:34.085046506Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 12 18:43:34.085316 dockerd[1850]: time="2025-12-12T18:43:34.085137508Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 12 18:43:34.085316 dockerd[1850]: time="2025-12-12T18:43:34.085239634Z" level=info msg="Initializing buildkit"
Dec 12 18:43:34.114642 dockerd[1850]: time="2025-12-12T18:43:34.113522511Z" level=info msg="Completed buildkit initialization"
Dec 12 18:43:34.122363 dockerd[1850]: time="2025-12-12T18:43:34.122291662Z" level=info msg="Daemon has completed initialization"
Dec 12 18:43:34.122508 dockerd[1850]: time="2025-12-12T18:43:34.122365988Z" level=info msg="API listen on /run/docker.sock"
Dec 12 18:43:34.122814 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 12 18:43:34.958118 containerd[1509]: time="2025-12-12T18:43:34.958046368Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Dec 12 18:43:35.374350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652396194.mount: Deactivated successfully.
Dec 12 18:43:36.795467 containerd[1509]: time="2025-12-12T18:43:36.795389263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:36.796910 containerd[1509]: time="2025-12-12T18:43:36.796858075Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27075656"
Dec 12 18:43:36.798440 containerd[1509]: time="2025-12-12T18:43:36.797850317Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:36.800994 containerd[1509]: time="2025-12-12T18:43:36.800949983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:36.802347 containerd[1509]: time="2025-12-12T18:43:36.802304670Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.844207149s"
Dec 12 18:43:36.802524 containerd[1509]: time="2025-12-12T18:43:36.802498859Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Dec 12 18:43:36.803752 containerd[1509]: time="2025-12-12T18:43:36.803726360Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Dec 12 18:43:38.009988 containerd[1509]: time="2025-12-12T18:43:38.009923591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:38.011418 containerd[1509]: time="2025-12-12T18:43:38.011293857Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21164374"
Dec 12 18:43:38.012488 containerd[1509]: time="2025-12-12T18:43:38.012453700Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:38.015507 containerd[1509]: time="2025-12-12T18:43:38.015446644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:38.017008 containerd[1509]: time="2025-12-12T18:43:38.016796934Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.212895158s"
Dec 12 18:43:38.017008 containerd[1509]: time="2025-12-12T18:43:38.016841673Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Dec 12 18:43:38.017758 containerd[1509]: time="2025-12-12T18:43:38.017560599Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Dec 12 18:43:38.958415 containerd[1509]: time="2025-12-12T18:43:38.958330000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:38.959686 containerd[1509]: time="2025-12-12T18:43:38.959625767Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15727843"
Dec 12 18:43:38.960549 containerd[1509]: time="2025-12-12T18:43:38.960482849Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:38.963691 containerd[1509]: time="2025-12-12T18:43:38.963633586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:38.965216 containerd[1509]: time="2025-12-12T18:43:38.964993707Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 947.019055ms"
Dec 12 18:43:38.965216 containerd[1509]: time="2025-12-12T18:43:38.965036359Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Dec 12 18:43:38.966008 containerd[1509]: time="2025-12-12T18:43:38.965975531Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Dec 12 18:43:39.472930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:43:39.477659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:43:39.787262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:43:39.803507 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:43:39.897595 kubelet[2139]: E1212 18:43:39.897409 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:43:39.903128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:43:39.903351 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:43:39.904121 systemd[1]: kubelet.service: Consumed 250ms CPU time, 109.7M memory peak.
Dec 12 18:43:40.175373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333562241.mount: Deactivated successfully.
Dec 12 18:43:40.754345 containerd[1509]: time="2025-12-12T18:43:40.754274787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:40.755820 containerd[1509]: time="2025-12-12T18:43:40.755602039Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25967188"
Dec 12 18:43:40.756795 containerd[1509]: time="2025-12-12T18:43:40.756740858Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:40.759209 containerd[1509]: time="2025-12-12T18:43:40.759167629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:40.760151 containerd[1509]: time="2025-12-12T18:43:40.760109735Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.794086865s"
Dec 12 18:43:40.760297 containerd[1509]: time="2025-12-12T18:43:40.760271773Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Dec 12 18:43:40.760936 containerd[1509]: time="2025-12-12T18:43:40.760890618Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Dec 12 18:43:41.081613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492821072.mount: Deactivated successfully.
Dec 12 18:43:42.303821 containerd[1509]: time="2025-12-12T18:43:42.303745623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:42.305289 containerd[1509]: time="2025-12-12T18:43:42.305233381Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22394649"
Dec 12 18:43:42.306425 containerd[1509]: time="2025-12-12T18:43:42.306359370Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:42.310433 containerd[1509]: time="2025-12-12T18:43:42.310333617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:42.311939 containerd[1509]: time="2025-12-12T18:43:42.311778122Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.550668399s"
Dec 12 18:43:42.311939 containerd[1509]: time="2025-12-12T18:43:42.311823793Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Dec 12 18:43:42.312596 containerd[1509]: time="2025-12-12T18:43:42.312541659Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Dec 12 18:43:42.686314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592266621.mount: Deactivated successfully.
Dec 12 18:43:42.690673 containerd[1509]: time="2025-12-12T18:43:42.690611930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:42.692056 containerd[1509]: time="2025-12-12T18:43:42.691811651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=322152"
Dec 12 18:43:42.693151 containerd[1509]: time="2025-12-12T18:43:42.693111494Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:42.695825 containerd[1509]: time="2025-12-12T18:43:42.695787000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:42.696856 containerd[1509]: time="2025-12-12T18:43:42.696821005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 384.087161ms"
Dec 12 18:43:42.697011 containerd[1509]: time="2025-12-12T18:43:42.696986959Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Dec 12 18:43:42.697673 containerd[1509]: time="2025-12-12T18:43:42.697618638Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Dec 12 18:43:43.050764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944828799.mount: Deactivated successfully.
Dec 12 18:43:46.050421 containerd[1509]: time="2025-12-12T18:43:46.050318327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:46.052132 containerd[1509]: time="2025-12-12T18:43:46.051937401Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74172452"
Dec 12 18:43:46.053065 containerd[1509]: time="2025-12-12T18:43:46.053021130Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:46.056584 containerd[1509]: time="2025-12-12T18:43:46.056538487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:43:46.058258 containerd[1509]: time="2025-12-12T18:43:46.058076025Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.360410831s"
Dec 12 18:43:46.058258 containerd[1509]: time="2025-12-12T18:43:46.058122148Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Dec 12 18:43:49.986393 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 12 18:43:49.991659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:43:50.216868 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 12 18:43:50.217013 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 12 18:43:50.217544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:43:50.217914 systemd[1]: kubelet.service: Consumed 112ms CPU time, 69.6M memory peak.
Dec 12 18:43:50.230114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:43:50.267215 systemd[1]: Reload requested from client PID 2292 ('systemctl') (unit session-7.scope)...
Dec 12 18:43:50.267242 systemd[1]: Reloading...
Dec 12 18:43:50.453433 zram_generator::config[2337]: No configuration found.
Dec 12 18:43:50.767232 systemd[1]: Reloading finished in 499 ms.
Dec 12 18:43:50.838978 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 12 18:43:50.839134 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 12 18:43:50.839620 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:43:50.839695 systemd[1]: kubelet.service: Consumed 173ms CPU time, 98.2M memory peak.
Dec 12 18:43:50.842169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:43:51.137108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:43:51.147859 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 18:43:51.202460 kubelet[2388]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 18:43:51.202460 kubelet[2388]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:43:51.202968 kubelet[2388]: I1212 18:43:51.202529 2388 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 18:43:51.956716 kubelet[2388]: I1212 18:43:51.956659 2388 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 12 18:43:51.956716 kubelet[2388]: I1212 18:43:51.956695 2388 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 18:43:51.959617 kubelet[2388]: I1212 18:43:51.959581 2388 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 12 18:43:51.959617 kubelet[2388]: I1212 18:43:51.959616 2388 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 18:43:51.959988 kubelet[2388]: I1212 18:43:51.959947 2388 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 12 18:43:51.970437 kubelet[2388]: E1212 18:43:51.969438 2388 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 18:43:51.972875 kubelet[2388]: I1212 18:43:51.972839 2388 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 18:43:51.978695 kubelet[2388]: I1212 18:43:51.978672 2388 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 18:43:51.983187 kubelet[2388]: I1212 18:43:51.983161 2388 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 12 18:43:51.983686 kubelet[2388]: I1212 18:43:51.983626 2388 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 18:43:51.983901 kubelet[2388]: I1212 18:43:51.983670 2388 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 18:43:51.983901 kubelet[2388]: I1212 18:43:51.983897 2388 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 18:43:51.984116 kubelet[2388]: I1212 18:43:51.983914 2388 container_manager_linux.go:306] "Creating device plugin manager"
Dec 12 18:43:51.984116 kubelet[2388]: I1212 18:43:51.984055 2388 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 12 18:43:51.988304 kubelet[2388]: I1212 18:43:51.988238 2388 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:43:51.988741 kubelet[2388]: I1212 18:43:51.988548 2388 kubelet.go:475] "Attempting to sync node with API server"
Dec 12 18:43:51.988741 kubelet[2388]: I1212 18:43:51.988591 2388 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 18:43:51.988741 kubelet[2388]: I1212 18:43:51.988628 2388 kubelet.go:387] "Adding apiserver pod source"
Dec 12 18:43:51.988741 kubelet[2388]: I1212 18:43:51.988657 2388 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 18:43:51.990421 kubelet[2388]: E1212 18:43:51.989487 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 18:43:51.994088 kubelet[2388]: I1212 18:43:51.994060 2388 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 18:43:51.994992 kubelet[2388]: I1212 18:43:51.994961 2388 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 18:43:51.995106 kubelet[2388]: I1212 18:43:51.995015 2388 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 12 18:43:51.995106 kubelet[2388]: W1212 18:43:51.995082 2388 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 12 18:43:52.001904 kubelet[2388]: E1212 18:43:52.001874 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 18:43:52.014443 kubelet[2388]: I1212 18:43:52.014118 2388 server.go:1262] "Started kubelet"
Dec 12 18:43:52.016086 kubelet[2388]: I1212 18:43:52.015206 2388 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 18:43:52.022583 kubelet[2388]: E1212 18:43:52.020188 2388 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal.18808c0ad10cd390 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,UID:ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,},FirstTimestamp:2025-12-12 18:43:52.014074768 +0000 UTC m=+0.860488490,LastTimestamp:2025-12-12 18:43:52.014074768 +0000 UTC m=+0.860488490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,}"
Dec 12 18:43:52.024431 kubelet[2388]: I1212 18:43:52.024207 2388 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 18:43:52.025719 kubelet[2388]: I1212 18:43:52.025698 2388 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 12 18:43:52.028534 kubelet[2388]: I1212 18:43:52.025876 2388 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 12 18:43:52.029492 kubelet[2388]: I1212 18:43:52.026080 2388 server.go:310] "Adding debug handlers to kubelet server"
Dec 12 18:43:52.029711 kubelet[2388]: E1212 18:43:52.026113 2388 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found"
Dec 12 18:43:52.029832 kubelet[2388]: I1212 18:43:52.026126 2388 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 18:43:52.029997 kubelet[2388]: I1212 18:43:52.029975 2388 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 12 18:43:52.030477 kubelet[2388]: I1212 18:43:52.030452 2388 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 18:43:52.030692 kubelet[2388]: I1212 18:43:52.028046 2388 reconciler.go:29] "Reconciler: start to sync state"
Dec 12 18:43:52.030962 kubelet[2388]: E1212 18:43:52.030927 2388 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 18:43:52.031288 kubelet[2388]: E1212 18:43:52.031258 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 18:43:52.031555 kubelet[2388]: E1212 18:43:52.031519 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="200ms"
Dec 12 18:43:52.031911 kubelet[2388]: I1212 18:43:52.031888 2388 factory.go:223] Registration of the systemd container factory successfully
Dec 12 18:43:52.034051 kubelet[2388]: I1212 18:43:52.034021 2388 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 18:43:52.034572 kubelet[2388]: I1212 18:43:52.032782 2388 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 18:43:52.041317 kubelet[2388]: I1212 18:43:52.038207 2388 factory.go:223] Registration of the containerd container factory successfully
Dec 12 18:43:52.069304 kubelet[2388]: I1212 18:43:52.069281 2388 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 18:43:52.069554 kubelet[2388]: I1212 18:43:52.069537 2388 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 18:43:52.069747 kubelet[2388]: I1212 18:43:52.069625 2388 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:43:52.073991 kubelet[2388]: I1212 18:43:52.073966 2388 policy_none.go:49] "None policy: Start"
Dec 12 18:43:52.074133 kubelet[2388]: I1212 18:43:52.074116 2388 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 12 18:43:52.074537 kubelet[2388]: I1212 18:43:52.074220 2388 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 12 18:43:52.076024 kubelet[2388]: I1212 18:43:52.076002 2388 policy_none.go:47] "Start"
Dec 12 18:43:52.079674 kubelet[2388]: I1212 18:43:52.079645 2388 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 12 18:43:52.081955 kubelet[2388]: I1212 18:43:52.081919 2388 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 12 18:43:52.082429 kubelet[2388]: I1212 18:43:52.082049 2388 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 12 18:43:52.082429 kubelet[2388]: I1212 18:43:52.082080 2388 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 12 18:43:52.082429 kubelet[2388]: E1212 18:43:52.082127 2388 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 18:43:52.088849 kubelet[2388]: E1212 18:43:52.088822 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 18:43:52.094852 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 12 18:43:52.115515 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 12 18:43:52.120861 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 12 18:43:52.129949 kubelet[2388]: E1212 18:43:52.129911 2388 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" Dec 12 18:43:52.133577 kubelet[2388]: E1212 18:43:52.133377 2388 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:43:52.134000 kubelet[2388]: I1212 18:43:52.133896 2388 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:43:52.134082 kubelet[2388]: I1212 18:43:52.134028 2388 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:43:52.134789 kubelet[2388]: I1212 18:43:52.134631 2388 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:43:52.136081 kubelet[2388]: E1212 18:43:52.136042 2388 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:43:52.136578 kubelet[2388]: E1212 18:43:52.136548 2388 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" Dec 12 18:43:52.202906 systemd[1]: Created slice kubepods-burstable-pod87d7d48865e1347f5bd31e55fe8d19d3.slice - libcontainer container kubepods-burstable-pod87d7d48865e1347f5bd31e55fe8d19d3.slice. 
Dec 12 18:43:52.213032 kubelet[2388]: E1212 18:43:52.212480 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.222496 systemd[1]: Created slice kubepods-burstable-pod125e2d11c956e76d31768dd485cc1a8a.slice - libcontainer container kubepods-burstable-pod125e2d11c956e76d31768dd485cc1a8a.slice. Dec 12 18:43:52.232039 kubelet[2388]: I1212 18:43:52.231686 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/125e2d11c956e76d31768dd485cc1a8a-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"125e2d11c956e76d31768dd485cc1a8a\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.232039 kubelet[2388]: I1212 18:43:52.231736 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87d7d48865e1347f5bd31e55fe8d19d3-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"87d7d48865e1347f5bd31e55fe8d19d3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.232039 kubelet[2388]: I1212 18:43:52.231767 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87d7d48865e1347f5bd31e55fe8d19d3-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"87d7d48865e1347f5bd31e55fe8d19d3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.232039 
kubelet[2388]: I1212 18:43:52.231817 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/125e2d11c956e76d31768dd485cc1a8a-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"125e2d11c956e76d31768dd485cc1a8a\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.232304 kubelet[2388]: I1212 18:43:52.231872 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/125e2d11c956e76d31768dd485cc1a8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"125e2d11c956e76d31768dd485cc1a8a\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.232304 kubelet[2388]: I1212 18:43:52.231902 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/712e529fb5e367e18914f85459f4de7e-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"712e529fb5e367e18914f85459f4de7e\") " pod="kube-system/kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.232304 kubelet[2388]: I1212 18:43:52.231940 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87d7d48865e1347f5bd31e55fe8d19d3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"87d7d48865e1347f5bd31e55fe8d19d3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 
12 18:43:52.232304 kubelet[2388]: I1212 18:43:52.231969 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/125e2d11c956e76d31768dd485cc1a8a-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"125e2d11c956e76d31768dd485cc1a8a\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.232530 kubelet[2388]: I1212 18:43:52.232005 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/125e2d11c956e76d31768dd485cc1a8a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"125e2d11c956e76d31768dd485cc1a8a\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.232530 kubelet[2388]: E1212 18:43:52.232160 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="400ms" Dec 12 18:43:52.235304 kubelet[2388]: E1212 18:43:52.235072 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.238030 kubelet[2388]: I1212 18:43:52.238001 2388 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.238374 kubelet[2388]: E1212 18:43:52.238327 2388 kubelet_node_status.go:107] "Unable to register node with API 
server" err="Post \"https://10.128.0.44:6443/api/v1/nodes\": dial tcp 10.128.0.44:6443: connect: connection refused" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.239376 systemd[1]: Created slice kubepods-burstable-pod712e529fb5e367e18914f85459f4de7e.slice - libcontainer container kubepods-burstable-pod712e529fb5e367e18914f85459f4de7e.slice. Dec 12 18:43:52.242762 kubelet[2388]: E1212 18:43:52.242718 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.442923 kubelet[2388]: I1212 18:43:52.442872 2388 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.443374 kubelet[2388]: E1212 18:43:52.443338 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.44:6443/api/v1/nodes\": dial tcp 10.128.0.44:6443: connect: connection refused" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.516531 containerd[1509]: time="2025-12-12T18:43:52.516472946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,Uid:87d7d48865e1347f5bd31e55fe8d19d3,Namespace:kube-system,Attempt:0,}" Dec 12 18:43:52.538554 containerd[1509]: time="2025-12-12T18:43:52.538508976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,Uid:125e2d11c956e76d31768dd485cc1a8a,Namespace:kube-system,Attempt:0,}" Dec 12 18:43:52.545349 containerd[1509]: time="2025-12-12T18:43:52.545295134Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,Uid:712e529fb5e367e18914f85459f4de7e,Namespace:kube-system,Attempt:0,}" Dec 12 18:43:52.633014 kubelet[2388]: E1212 18:43:52.632951 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="800ms" Dec 12 18:43:52.848337 kubelet[2388]: I1212 18:43:52.848215 2388 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.848782 kubelet[2388]: E1212 18:43:52.848663 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.44:6443/api/v1/nodes\": dial tcp 10.128.0.44:6443: connect: connection refused" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:52.894315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351371469.mount: Deactivated successfully. 
Dec 12 18:43:52.899891 containerd[1509]: time="2025-12-12T18:43:52.899846459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:43:52.903943 containerd[1509]: time="2025-12-12T18:43:52.903787663Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Dec 12 18:43:52.904952 containerd[1509]: time="2025-12-12T18:43:52.904887842Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:43:52.905854 containerd[1509]: time="2025-12-12T18:43:52.905805353Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:43:52.907733 containerd[1509]: time="2025-12-12T18:43:52.907670248Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:43:52.908939 containerd[1509]: time="2025-12-12T18:43:52.908908349Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:43:52.910151 containerd[1509]: time="2025-12-12T18:43:52.910103070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:43:52.912355 containerd[1509]: time="2025-12-12T18:43:52.910992974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 
18:43:52.912355 containerd[1509]: time="2025-12-12T18:43:52.911958273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 393.059714ms" Dec 12 18:43:52.914626 containerd[1509]: time="2025-12-12T18:43:52.914587967Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 364.253001ms" Dec 12 18:43:52.939055 kubelet[2388]: E1212 18:43:52.939019 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:43:52.941816 containerd[1509]: time="2025-12-12T18:43:52.941778091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 401.831153ms" Dec 12 18:43:52.961776 containerd[1509]: time="2025-12-12T18:43:52.961737160Z" level=info msg="connecting to shim b6ffc920f2e822704380b1f6fbc880ff153a7ba60a08fd1ed2d9a6b4ad514764" address="unix:///run/containerd/s/7e3da6f149ebf7a4ed3ba4a0df95aed69d9b90a7f259e5a45be1698deaf8a442" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:52.981610 containerd[1509]: 
time="2025-12-12T18:43:52.981557465Z" level=info msg="connecting to shim 7df48b58a4ac2548e56788d7f3645c5de4941fe6ffa7ad6b0936228e43eaf7da" address="unix:///run/containerd/s/2bb513088abdef19944274f77afd83d2126eba34aab6296c55089d53b1dada11" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:52.995492 containerd[1509]: time="2025-12-12T18:43:52.995452901Z" level=info msg="connecting to shim 96d38970c1abf4c49d06f7c679da9603957ce6758aa5469951c313440fe9521d" address="unix:///run/containerd/s/b90b0b42d549c967f24eaf72c6c85d7f5b93013910df3d21b89ad491ab9e47dd" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:53.034631 systemd[1]: Started cri-containerd-b6ffc920f2e822704380b1f6fbc880ff153a7ba60a08fd1ed2d9a6b4ad514764.scope - libcontainer container b6ffc920f2e822704380b1f6fbc880ff153a7ba60a08fd1ed2d9a6b4ad514764. Dec 12 18:43:53.057609 systemd[1]: Started cri-containerd-7df48b58a4ac2548e56788d7f3645c5de4941fe6ffa7ad6b0936228e43eaf7da.scope - libcontainer container 7df48b58a4ac2548e56788d7f3645c5de4941fe6ffa7ad6b0936228e43eaf7da. Dec 12 18:43:53.061443 systemd[1]: Started cri-containerd-96d38970c1abf4c49d06f7c679da9603957ce6758aa5469951c313440fe9521d.scope - libcontainer container 96d38970c1abf4c49d06f7c679da9603957ce6758aa5469951c313440fe9521d. 
Dec 12 18:43:53.187626 kubelet[2388]: E1212 18:43:53.187485 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:43:53.192231 containerd[1509]: time="2025-12-12T18:43:53.192182662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,Uid:87d7d48865e1347f5bd31e55fe8d19d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6ffc920f2e822704380b1f6fbc880ff153a7ba60a08fd1ed2d9a6b4ad514764\"" Dec 12 18:43:53.195825 kubelet[2388]: E1212 18:43:53.195609 2388 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-21291" Dec 12 18:43:53.202061 containerd[1509]: time="2025-12-12T18:43:53.202009851Z" level=info msg="CreateContainer within sandbox \"b6ffc920f2e822704380b1f6fbc880ff153a7ba60a08fd1ed2d9a6b4ad514764\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:43:53.204269 containerd[1509]: time="2025-12-12T18:43:53.204215161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,Uid:712e529fb5e367e18914f85459f4de7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7df48b58a4ac2548e56788d7f3645c5de4941fe6ffa7ad6b0936228e43eaf7da\"" Dec 12 18:43:53.206272 kubelet[2388]: E1212 18:43:53.206203 2388 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" hostnameMaxLen=63 
truncatedHostname="kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-21291" Dec 12 18:43:53.213081 containerd[1509]: time="2025-12-12T18:43:53.213013198Z" level=info msg="CreateContainer within sandbox \"7df48b58a4ac2548e56788d7f3645c5de4941fe6ffa7ad6b0936228e43eaf7da\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:43:53.216327 containerd[1509]: time="2025-12-12T18:43:53.216083357Z" level=info msg="Container 967a9def480b4ad56d03ed29686ef8b6032a42a99bc40d206d1eae6d4b4fb7d1: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:53.218437 containerd[1509]: time="2025-12-12T18:43:53.217389434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,Uid:125e2d11c956e76d31768dd485cc1a8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d38970c1abf4c49d06f7c679da9603957ce6758aa5469951c313440fe9521d\"" Dec 12 18:43:53.221215 kubelet[2388]: E1212 18:43:53.220849 2388 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flat" Dec 12 18:43:53.224645 containerd[1509]: time="2025-12-12T18:43:53.224592165Z" level=info msg="CreateContainer within sandbox \"96d38970c1abf4c49d06f7c679da9603957ce6758aa5469951c313440fe9521d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:43:53.231768 containerd[1509]: time="2025-12-12T18:43:53.231241062Z" level=info msg="CreateContainer within sandbox \"b6ffc920f2e822704380b1f6fbc880ff153a7ba60a08fd1ed2d9a6b4ad514764\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"967a9def480b4ad56d03ed29686ef8b6032a42a99bc40d206d1eae6d4b4fb7d1\"" Dec 12 18:43:53.232502 containerd[1509]: time="2025-12-12T18:43:53.232472320Z" level=info msg="StartContainer for 
\"967a9def480b4ad56d03ed29686ef8b6032a42a99bc40d206d1eae6d4b4fb7d1\"" Dec 12 18:43:53.233102 containerd[1509]: time="2025-12-12T18:43:53.233069928Z" level=info msg="Container cdd2525ca88d76e827d6c30be6503306b6a557590f471a415849666e9ff08d19: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:53.234670 containerd[1509]: time="2025-12-12T18:43:53.234633182Z" level=info msg="connecting to shim 967a9def480b4ad56d03ed29686ef8b6032a42a99bc40d206d1eae6d4b4fb7d1" address="unix:///run/containerd/s/7e3da6f149ebf7a4ed3ba4a0df95aed69d9b90a7f259e5a45be1698deaf8a442" protocol=ttrpc version=3 Dec 12 18:43:53.243196 containerd[1509]: time="2025-12-12T18:43:53.243154228Z" level=info msg="Container 482868d85aff96fddbcbe8e937db923dee988b3b797dc8e492f71fe92e9c0bc1: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:53.247685 containerd[1509]: time="2025-12-12T18:43:53.247569005Z" level=info msg="CreateContainer within sandbox \"7df48b58a4ac2548e56788d7f3645c5de4941fe6ffa7ad6b0936228e43eaf7da\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cdd2525ca88d76e827d6c30be6503306b6a557590f471a415849666e9ff08d19\"" Dec 12 18:43:53.248308 containerd[1509]: time="2025-12-12T18:43:53.248235915Z" level=info msg="StartContainer for \"cdd2525ca88d76e827d6c30be6503306b6a557590f471a415849666e9ff08d19\"" Dec 12 18:43:53.251630 containerd[1509]: time="2025-12-12T18:43:53.251594242Z" level=info msg="connecting to shim cdd2525ca88d76e827d6c30be6503306b6a557590f471a415849666e9ff08d19" address="unix:///run/containerd/s/2bb513088abdef19944274f77afd83d2126eba34aab6296c55089d53b1dada11" protocol=ttrpc version=3 Dec 12 18:43:53.257049 containerd[1509]: time="2025-12-12T18:43:53.257010368Z" level=info msg="CreateContainer within sandbox \"96d38970c1abf4c49d06f7c679da9603957ce6758aa5469951c313440fe9521d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"482868d85aff96fddbcbe8e937db923dee988b3b797dc8e492f71fe92e9c0bc1\"" Dec 12 
18:43:53.257784 containerd[1509]: time="2025-12-12T18:43:53.257707857Z" level=info msg="StartContainer for \"482868d85aff96fddbcbe8e937db923dee988b3b797dc8e492f71fe92e9c0bc1\"" Dec 12 18:43:53.263788 containerd[1509]: time="2025-12-12T18:43:53.263688692Z" level=info msg="connecting to shim 482868d85aff96fddbcbe8e937db923dee988b3b797dc8e492f71fe92e9c0bc1" address="unix:///run/containerd/s/b90b0b42d549c967f24eaf72c6c85d7f5b93013910df3d21b89ad491ab9e47dd" protocol=ttrpc version=3 Dec 12 18:43:53.280152 systemd[1]: Started cri-containerd-967a9def480b4ad56d03ed29686ef8b6032a42a99bc40d206d1eae6d4b4fb7d1.scope - libcontainer container 967a9def480b4ad56d03ed29686ef8b6032a42a99bc40d206d1eae6d4b4fb7d1. Dec 12 18:43:53.305666 systemd[1]: Started cri-containerd-cdd2525ca88d76e827d6c30be6503306b6a557590f471a415849666e9ff08d19.scope - libcontainer container cdd2525ca88d76e827d6c30be6503306b6a557590f471a415849666e9ff08d19. Dec 12 18:43:53.322637 systemd[1]: Started cri-containerd-482868d85aff96fddbcbe8e937db923dee988b3b797dc8e492f71fe92e9c0bc1.scope - libcontainer container 482868d85aff96fddbcbe8e937db923dee988b3b797dc8e492f71fe92e9c0bc1. 
Dec 12 18:43:53.330622 kubelet[2388]: E1212 18:43:53.330544 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:43:53.347340 kubelet[2388]: E1212 18:43:53.347290 2388 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:43:53.407834 containerd[1509]: time="2025-12-12T18:43:53.407791160Z" level=info msg="StartContainer for \"967a9def480b4ad56d03ed29686ef8b6032a42a99bc40d206d1eae6d4b4fb7d1\" returns successfully" Dec 12 18:43:53.435260 kubelet[2388]: E1212 18:43:53.435173 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="1.6s" Dec 12 18:43:53.499797 containerd[1509]: time="2025-12-12T18:43:53.499739972Z" level=info msg="StartContainer for \"cdd2525ca88d76e827d6c30be6503306b6a557590f471a415849666e9ff08d19\" returns successfully" Dec 12 18:43:53.500831 containerd[1509]: time="2025-12-12T18:43:53.500567377Z" level=info msg="StartContainer for \"482868d85aff96fddbcbe8e937db923dee988b3b797dc8e492f71fe92e9c0bc1\" returns successfully" Dec 12 18:43:53.654785 kubelet[2388]: I1212 18:43:53.654600 2388 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:54.125072 kubelet[2388]: E1212 18:43:54.124654 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:54.131647 kubelet[2388]: E1212 18:43:54.131217 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:54.132376 kubelet[2388]: E1212 18:43:54.132350 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:55.139235 kubelet[2388]: E1212 18:43:55.138756 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:55.141427 kubelet[2388]: E1212 18:43:55.140527 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:55.651969 kubelet[2388]: E1212 18:43:55.651691 2388 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:56.002894 kubelet[2388]: I1212 
18:43:56.002856 2388 apiserver.go:52] "Watching apiserver" Dec 12 18:43:56.188603 kubelet[2388]: E1212 18:43:56.188537 2388 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:56.228642 kubelet[2388]: I1212 18:43:56.228586 2388 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:56.228642 kubelet[2388]: E1212 18:43:56.228640 2388 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\": node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" not found" Dec 12 18:43:56.228896 kubelet[2388]: I1212 18:43:56.228767 2388 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 18:43:56.234471 kubelet[2388]: E1212 18:43:56.234326 2388 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal.18808c0ad10cd390 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,UID:ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,},FirstTimestamp:2025-12-12 18:43:52.014074768 +0000 UTC m=+0.860488490,LastTimestamp:2025-12-12 18:43:52.014074768 +0000 UTC m=+0.860488490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal,}" Dec 12 18:43:56.326870 kubelet[2388]: I1212 18:43:56.326733 2388 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:56.349106 kubelet[2388]: E1212 18:43:56.349056 2388 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:56.349106 kubelet[2388]: I1212 18:43:56.349102 2388 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:56.353537 kubelet[2388]: E1212 18:43:56.353500 2388 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:56.353694 kubelet[2388]: I1212 18:43:56.353538 2388 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:56.359043 kubelet[2388]: E1212 18:43:56.358993 2388 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:57.221593 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 12 18:43:58.132089 systemd[1]: Reload requested from client PID 2670 ('systemctl') (unit session-7.scope)... Dec 12 18:43:58.132112 systemd[1]: Reloading... Dec 12 18:43:58.297485 zram_generator::config[2717]: No configuration found. Dec 12 18:43:58.698644 systemd[1]: Reloading finished in 565 ms. Dec 12 18:43:58.739911 kubelet[2388]: I1212 18:43:58.739638 2388 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:43:58.739890 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:43:58.762464 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:43:58.762937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:43:58.763009 systemd[1]: kubelet.service: Consumed 1.400s CPU time, 125.5M memory peak. Dec 12 18:43:58.766870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:43:59.158941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:43:59.170933 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:43:59.254436 kubelet[2762]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:43:59.254436 kubelet[2762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 18:43:59.254929 kubelet[2762]: I1212 18:43:59.254496 2762 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:43:59.265930 kubelet[2762]: I1212 18:43:59.265857 2762 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 18:43:59.265930 kubelet[2762]: I1212 18:43:59.265883 2762 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:43:59.266378 kubelet[2762]: I1212 18:43:59.266027 2762 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 18:43:59.266378 kubelet[2762]: I1212 18:43:59.266046 2762 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:43:59.266705 kubelet[2762]: I1212 18:43:59.266689 2762 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:43:59.268802 kubelet[2762]: I1212 18:43:59.268774 2762 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:43:59.274432 kubelet[2762]: I1212 18:43:59.274297 2762 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:43:59.279430 kubelet[2762]: I1212 18:43:59.279385 2762 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:43:59.288419 kubelet[2762]: I1212 18:43:59.287591 2762 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 18:43:59.288419 kubelet[2762]: I1212 18:43:59.287926 2762 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:43:59.288419 kubelet[2762]: I1212 18:43:59.287953 2762 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:43:59.288419 kubelet[2762]: I1212 18:43:59.288347 2762 topology_manager.go:138] "Creating topology 
manager with none policy" Dec 12 18:43:59.288735 kubelet[2762]: I1212 18:43:59.288365 2762 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 18:43:59.290463 kubelet[2762]: I1212 18:43:59.290443 2762 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 18:43:59.292218 kubelet[2762]: I1212 18:43:59.292189 2762 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:43:59.293930 kubelet[2762]: I1212 18:43:59.293913 2762 kubelet.go:475] "Attempting to sync node with API server" Dec 12 18:43:59.294032 kubelet[2762]: I1212 18:43:59.294021 2762 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:43:59.294108 kubelet[2762]: I1212 18:43:59.294101 2762 kubelet.go:387] "Adding apiserver pod source" Dec 12 18:43:59.294174 kubelet[2762]: I1212 18:43:59.294166 2762 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:43:59.300196 kubelet[2762]: I1212 18:43:59.300167 2762 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:43:59.304055 kubelet[2762]: I1212 18:43:59.304026 2762 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:43:59.304149 kubelet[2762]: I1212 18:43:59.304110 2762 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 18:43:59.351522 kubelet[2762]: I1212 18:43:59.351480 2762 server.go:1262] "Started kubelet" Dec 12 18:43:59.353664 kubelet[2762]: I1212 18:43:59.352500 2762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:43:59.354427 kubelet[2762]: I1212 18:43:59.354000 2762 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:43:59.355652 kubelet[2762]: I1212 18:43:59.355624 2762 
volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 18:43:59.356172 kubelet[2762]: I1212 18:43:59.356149 2762 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 18:43:59.358294 kubelet[2762]: I1212 18:43:59.356556 2762 reconciler.go:29] "Reconciler: start to sync state" Dec 12 18:43:59.359257 kubelet[2762]: I1212 18:43:59.359121 2762 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:43:59.362476 kubelet[2762]: I1212 18:43:59.362416 2762 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 18:43:59.363581 kubelet[2762]: I1212 18:43:59.360697 2762 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:43:59.363581 kubelet[2762]: I1212 18:43:59.363086 2762 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:43:59.364235 kubelet[2762]: I1212 18:43:59.363912 2762 server.go:310] "Adding debug handlers to kubelet server" Dec 12 18:43:59.371655 kubelet[2762]: I1212 18:43:59.364558 2762 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:43:59.374622 kubelet[2762]: I1212 18:43:59.372552 2762 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:43:59.382703 kubelet[2762]: I1212 18:43:59.382676 2762 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:43:59.390239 kubelet[2762]: E1212 18:43:59.389384 2762 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:43:59.429844 kubelet[2762]: I1212 18:43:59.428033 2762 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 18:43:59.437206 kubelet[2762]: I1212 18:43:59.437140 2762 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 12 18:43:59.437206 kubelet[2762]: I1212 18:43:59.437166 2762 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 18:43:59.437822 kubelet[2762]: I1212 18:43:59.437322 2762 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 18:43:59.440927 kubelet[2762]: E1212 18:43:59.440884 2762 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:43:59.496267 kubelet[2762]: I1212 18:43:59.495309 2762 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:43:59.496267 kubelet[2762]: I1212 18:43:59.495330 2762 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:43:59.496267 kubelet[2762]: I1212 18:43:59.495354 2762 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:43:59.496267 kubelet[2762]: I1212 18:43:59.495563 2762 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:43:59.496267 kubelet[2762]: I1212 18:43:59.495579 2762 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:43:59.496267 kubelet[2762]: I1212 18:43:59.495603 2762 policy_none.go:49] "None policy: Start" Dec 12 18:43:59.496267 kubelet[2762]: I1212 18:43:59.495616 2762 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 18:43:59.496267 kubelet[2762]: I1212 18:43:59.495631 2762 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 18:43:59.496267 kubelet[2762]: I1212 18:43:59.495771 2762 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state 
checkpoint" Dec 12 18:43:59.496267 kubelet[2762]: I1212 18:43:59.495783 2762 policy_none.go:47] "Start" Dec 12 18:43:59.508052 kubelet[2762]: E1212 18:43:59.507637 2762 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:43:59.508453 kubelet[2762]: I1212 18:43:59.508428 2762 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:43:59.508622 kubelet[2762]: I1212 18:43:59.508462 2762 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:43:59.510539 kubelet[2762]: I1212 18:43:59.509232 2762 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:43:59.515028 kubelet[2762]: E1212 18:43:59.514971 2762 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:43:59.545005 kubelet[2762]: I1212 18:43:59.543273 2762 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.545261 kubelet[2762]: I1212 18:43:59.544425 2762 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.545446 kubelet[2762]: I1212 18:43:59.544630 2762 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.554005 kubelet[2762]: I1212 18:43:59.553925 2762 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 12 18:43:59.554702 kubelet[2762]: I1212 18:43:59.554637 2762 warnings.go:110] "Warning: metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 12 18:43:59.558704 kubelet[2762]: I1212 18:43:59.558224 2762 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 12 18:43:59.562514 kubelet[2762]: I1212 18:43:59.561434 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87d7d48865e1347f5bd31e55fe8d19d3-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"87d7d48865e1347f5bd31e55fe8d19d3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.562514 kubelet[2762]: I1212 18:43:59.561588 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87d7d48865e1347f5bd31e55fe8d19d3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"87d7d48865e1347f5bd31e55fe8d19d3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.562514 kubelet[2762]: I1212 18:43:59.562057 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/125e2d11c956e76d31768dd485cc1a8a-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"125e2d11c956e76d31768dd485cc1a8a\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.562514 kubelet[2762]: I1212 18:43:59.562124 2762 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/125e2d11c956e76d31768dd485cc1a8a-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"125e2d11c956e76d31768dd485cc1a8a\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.562744 kubelet[2762]: I1212 18:43:59.562184 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/125e2d11c956e76d31768dd485cc1a8a-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"125e2d11c956e76d31768dd485cc1a8a\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.562744 kubelet[2762]: I1212 18:43:59.562242 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/125e2d11c956e76d31768dd485cc1a8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"125e2d11c956e76d31768dd485cc1a8a\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.562744 kubelet[2762]: I1212 18:43:59.562290 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87d7d48865e1347f5bd31e55fe8d19d3-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"87d7d48865e1347f5bd31e55fe8d19d3\") " pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.562744 kubelet[2762]: I1212 18:43:59.562324 2762 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/125e2d11c956e76d31768dd485cc1a8a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"125e2d11c956e76d31768dd485cc1a8a\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.562960 kubelet[2762]: I1212 18:43:59.562378 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/712e529fb5e367e18914f85459f4de7e-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" (UID: \"712e529fb5e367e18914f85459f4de7e\") " pod="kube-system/kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.624573 kubelet[2762]: I1212 18:43:59.623563 2762 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.632620 kubelet[2762]: I1212 18:43:59.631948 2762 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:43:59.632620 kubelet[2762]: I1212 18:43:59.632047 2762 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:00.297504 kubelet[2762]: I1212 18:44:00.297440 2762 apiserver.go:52] "Watching apiserver" Dec 12 18:44:00.357752 kubelet[2762]: I1212 18:44:00.357695 2762 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 18:44:00.438832 kubelet[2762]: I1212 18:44:00.438756 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" 
podStartSLOduration=1.438732449 podStartE2EDuration="1.438732449s" podCreationTimestamp="2025-12-12 18:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:44:00.426513132 +0000 UTC m=+1.250272886" watchObservedRunningTime="2025-12-12 18:44:00.438732449 +0000 UTC m=+1.262492205" Dec 12 18:44:00.451425 kubelet[2762]: I1212 18:44:00.450363 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" podStartSLOduration=1.450344897 podStartE2EDuration="1.450344897s" podCreationTimestamp="2025-12-12 18:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:44:00.439119487 +0000 UTC m=+1.262879234" watchObservedRunningTime="2025-12-12 18:44:00.450344897 +0000 UTC m=+1.274104651" Dec 12 18:44:00.483190 kubelet[2762]: I1212 18:44:00.482747 2762 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:00.483190 kubelet[2762]: I1212 18:44:00.482975 2762 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:00.490764 kubelet[2762]: I1212 18:44:00.490716 2762 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 12 18:44:00.490872 kubelet[2762]: E1212 18:44:00.490780 2762 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" already exists" 
pod="kube-system/kube-apiserver-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:00.492133 kubelet[2762]: I1212 18:44:00.492084 2762 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 12 18:44:00.492223 kubelet[2762]: E1212 18:44:00.492169 2762 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:00.498461 kubelet[2762]: I1212 18:44:00.498302 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" podStartSLOduration=1.498284958 podStartE2EDuration="1.498284958s" podCreationTimestamp="2025-12-12 18:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:44:00.451736248 +0000 UTC m=+1.275496002" watchObservedRunningTime="2025-12-12 18:44:00.498284958 +0000 UTC m=+1.322044719" Dec 12 18:44:03.362107 kubelet[2762]: I1212 18:44:03.362046 2762 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:44:03.363212 containerd[1509]: time="2025-12-12T18:44:03.363169217Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:44:03.364170 kubelet[2762]: I1212 18:44:03.363916 2762 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:44:04.608881 systemd[1]: Created slice kubepods-besteffort-podb11a97f4_f48b_47a1_936d_f7616e50c752.slice - libcontainer container kubepods-besteffort-podb11a97f4_f48b_47a1_936d_f7616e50c752.slice. 
Dec 12 18:44:04.695175 kubelet[2762]: I1212 18:44:04.695115 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b11a97f4-f48b-47a1-936d-f7616e50c752-kube-proxy\") pod \"kube-proxy-s7w6q\" (UID: \"b11a97f4-f48b-47a1-936d-f7616e50c752\") " pod="kube-system/kube-proxy-s7w6q" Dec 12 18:44:04.695175 kubelet[2762]: I1212 18:44:04.695172 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcm49\" (UniqueName: \"kubernetes.io/projected/b11a97f4-f48b-47a1-936d-f7616e50c752-kube-api-access-zcm49\") pod \"kube-proxy-s7w6q\" (UID: \"b11a97f4-f48b-47a1-936d-f7616e50c752\") " pod="kube-system/kube-proxy-s7w6q" Dec 12 18:44:04.695801 kubelet[2762]: I1212 18:44:04.695211 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b11a97f4-f48b-47a1-936d-f7616e50c752-xtables-lock\") pod \"kube-proxy-s7w6q\" (UID: \"b11a97f4-f48b-47a1-936d-f7616e50c752\") " pod="kube-system/kube-proxy-s7w6q" Dec 12 18:44:04.695801 kubelet[2762]: I1212 18:44:04.695234 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b11a97f4-f48b-47a1-936d-f7616e50c752-lib-modules\") pod \"kube-proxy-s7w6q\" (UID: \"b11a97f4-f48b-47a1-936d-f7616e50c752\") " pod="kube-system/kube-proxy-s7w6q" Dec 12 18:44:04.748111 systemd[1]: Created slice kubepods-besteffort-pod81666ec2_804d_4415_98a0_eebb091646b4.slice - libcontainer container kubepods-besteffort-pod81666ec2_804d_4415_98a0_eebb091646b4.slice. 
Dec 12 18:44:04.795527 kubelet[2762]: I1212 18:44:04.795465 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxj24\" (UniqueName: \"kubernetes.io/projected/81666ec2-804d-4415-98a0-eebb091646b4-kube-api-access-wxj24\") pod \"tigera-operator-65cdcdfd6d-7mkmk\" (UID: \"81666ec2-804d-4415-98a0-eebb091646b4\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-7mkmk" Dec 12 18:44:04.795689 kubelet[2762]: I1212 18:44:04.795590 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/81666ec2-804d-4415-98a0-eebb091646b4-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-7mkmk\" (UID: \"81666ec2-804d-4415-98a0-eebb091646b4\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-7mkmk" Dec 12 18:44:04.922595 containerd[1509]: time="2025-12-12T18:44:04.922465263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s7w6q,Uid:b11a97f4-f48b-47a1-936d-f7616e50c752,Namespace:kube-system,Attempt:0,}" Dec 12 18:44:04.950243 containerd[1509]: time="2025-12-12T18:44:04.950153460Z" level=info msg="connecting to shim b1ba815662cebcf5e0da5d31b847da60084663a6c9c2d82ae55e965ae9c19416" address="unix:///run/containerd/s/490fccf60bd107caed6b132a5c0c089c785d6e9d595f3f545dd3a8614087b2a0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:44:04.998622 systemd[1]: Started cri-containerd-b1ba815662cebcf5e0da5d31b847da60084663a6c9c2d82ae55e965ae9c19416.scope - libcontainer container b1ba815662cebcf5e0da5d31b847da60084663a6c9c2d82ae55e965ae9c19416. 
Dec 12 18:44:05.047700 containerd[1509]: time="2025-12-12T18:44:05.047640464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s7w6q,Uid:b11a97f4-f48b-47a1-936d-f7616e50c752,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1ba815662cebcf5e0da5d31b847da60084663a6c9c2d82ae55e965ae9c19416\"" Dec 12 18:44:05.057459 containerd[1509]: time="2025-12-12T18:44:05.057034330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-7mkmk,Uid:81666ec2-804d-4415-98a0-eebb091646b4,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:44:05.058124 containerd[1509]: time="2025-12-12T18:44:05.058096758Z" level=info msg="CreateContainer within sandbox \"b1ba815662cebcf5e0da5d31b847da60084663a6c9c2d82ae55e965ae9c19416\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:44:05.080827 containerd[1509]: time="2025-12-12T18:44:05.080790731Z" level=info msg="Container 50512c4d9697dea3eab651fd41091b922fa7f248e69544c8c36915d05f036f25: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:44:05.093220 containerd[1509]: time="2025-12-12T18:44:05.093168494Z" level=info msg="CreateContainer within sandbox \"b1ba815662cebcf5e0da5d31b847da60084663a6c9c2d82ae55e965ae9c19416\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50512c4d9697dea3eab651fd41091b922fa7f248e69544c8c36915d05f036f25\"" Dec 12 18:44:05.094491 containerd[1509]: time="2025-12-12T18:44:05.094454497Z" level=info msg="StartContainer for \"50512c4d9697dea3eab651fd41091b922fa7f248e69544c8c36915d05f036f25\"" Dec 12 18:44:05.098421 containerd[1509]: time="2025-12-12T18:44:05.098325019Z" level=info msg="connecting to shim 50512c4d9697dea3eab651fd41091b922fa7f248e69544c8c36915d05f036f25" address="unix:///run/containerd/s/490fccf60bd107caed6b132a5c0c089c785d6e9d595f3f545dd3a8614087b2a0" protocol=ttrpc version=3 Dec 12 18:44:05.099248 containerd[1509]: time="2025-12-12T18:44:05.098956790Z" level=info msg="connecting to shim 
3a99e5dc5479ed3ad4b11c81bc6372bdf3a864fae9703cfa7eeab919e83f0c36" address="unix:///run/containerd/s/9e4d5076ab4be397f0d142c42e5d1b98eb607f96e420bc0c1dc4ddfa89daeda9" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:44:05.136686 systemd[1]: Started cri-containerd-50512c4d9697dea3eab651fd41091b922fa7f248e69544c8c36915d05f036f25.scope - libcontainer container 50512c4d9697dea3eab651fd41091b922fa7f248e69544c8c36915d05f036f25. Dec 12 18:44:05.152812 systemd[1]: Started cri-containerd-3a99e5dc5479ed3ad4b11c81bc6372bdf3a864fae9703cfa7eeab919e83f0c36.scope - libcontainer container 3a99e5dc5479ed3ad4b11c81bc6372bdf3a864fae9703cfa7eeab919e83f0c36. Dec 12 18:44:05.253570 containerd[1509]: time="2025-12-12T18:44:05.253496416Z" level=info msg="StartContainer for \"50512c4d9697dea3eab651fd41091b922fa7f248e69544c8c36915d05f036f25\" returns successfully" Dec 12 18:44:05.255948 containerd[1509]: time="2025-12-12T18:44:05.255492564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-7mkmk,Uid:81666ec2-804d-4415-98a0-eebb091646b4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3a99e5dc5479ed3ad4b11c81bc6372bdf3a864fae9703cfa7eeab919e83f0c36\"" Dec 12 18:44:05.259059 containerd[1509]: time="2025-12-12T18:44:05.259019113Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:44:07.089254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount854875380.mount: Deactivated successfully. 
Dec 12 18:44:08.751434 kubelet[2762]: I1212 18:44:08.751261 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s7w6q" podStartSLOduration=4.751240446 podStartE2EDuration="4.751240446s" podCreationTimestamp="2025-12-12 18:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:44:05.522788955 +0000 UTC m=+6.346548704" watchObservedRunningTime="2025-12-12 18:44:08.751240446 +0000 UTC m=+9.575000175" Dec 12 18:44:08.776800 containerd[1509]: time="2025-12-12T18:44:08.776745241Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:44:08.778076 containerd[1509]: time="2025-12-12T18:44:08.777826784Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 12 18:44:08.779050 containerd[1509]: time="2025-12-12T18:44:08.779010296Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:44:08.781814 containerd[1509]: time="2025-12-12T18:44:08.781772982Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:44:08.782781 containerd[1509]: time="2025-12-12T18:44:08.782745530Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.52367549s" Dec 12 18:44:08.782944 containerd[1509]: time="2025-12-12T18:44:08.782920120Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Dec 12 18:44:08.788142 containerd[1509]: time="2025-12-12T18:44:08.788093596Z" level=info msg="CreateContainer within sandbox \"3a99e5dc5479ed3ad4b11c81bc6372bdf3a864fae9703cfa7eeab919e83f0c36\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 12 18:44:08.796430 containerd[1509]: time="2025-12-12T18:44:08.795749940Z" level=info msg="Container b29349c87dadd78df03283d34e8e8408341efe649d43aa5003e0921f4c01cb34: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:44:08.807370 containerd[1509]: time="2025-12-12T18:44:08.807319167Z" level=info msg="CreateContainer within sandbox \"3a99e5dc5479ed3ad4b11c81bc6372bdf3a864fae9703cfa7eeab919e83f0c36\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b29349c87dadd78df03283d34e8e8408341efe649d43aa5003e0921f4c01cb34\""
Dec 12 18:44:08.808478 containerd[1509]: time="2025-12-12T18:44:08.808154418Z" level=info msg="StartContainer for \"b29349c87dadd78df03283d34e8e8408341efe649d43aa5003e0921f4c01cb34\""
Dec 12 18:44:08.809794 containerd[1509]: time="2025-12-12T18:44:08.809761114Z" level=info msg="connecting to shim b29349c87dadd78df03283d34e8e8408341efe649d43aa5003e0921f4c01cb34" address="unix:///run/containerd/s/9e4d5076ab4be397f0d142c42e5d1b98eb607f96e420bc0c1dc4ddfa89daeda9" protocol=ttrpc version=3
Dec 12 18:44:08.837599 systemd[1]: Started cri-containerd-b29349c87dadd78df03283d34e8e8408341efe649d43aa5003e0921f4c01cb34.scope - libcontainer container b29349c87dadd78df03283d34e8e8408341efe649d43aa5003e0921f4c01cb34.
Dec 12 18:44:08.881320 containerd[1509]: time="2025-12-12T18:44:08.881281816Z" level=info msg="StartContainer for \"b29349c87dadd78df03283d34e8e8408341efe649d43aa5003e0921f4c01cb34\" returns successfully"
Dec 12 18:44:09.527077 kubelet[2762]: I1212 18:44:09.526906 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-7mkmk" podStartSLOduration=2.001689617 podStartE2EDuration="5.526883124s" podCreationTimestamp="2025-12-12 18:44:04 +0000 UTC" firstStartedPulling="2025-12-12 18:44:05.258599231 +0000 UTC m=+6.082358973" lastFinishedPulling="2025-12-12 18:44:08.783792754 +0000 UTC m=+9.607552480" observedRunningTime="2025-12-12 18:44:09.52645582 +0000 UTC m=+10.350215572" watchObservedRunningTime="2025-12-12 18:44:09.526883124 +0000 UTC m=+10.350642877"
Dec 12 18:44:10.797545 update_engine[1493]: I20251212 18:44:10.797458 1493 update_attempter.cc:509] Updating boot flags...
Dec 12 18:44:16.347705 sudo[1831]: pam_unix(sudo:session): session closed for user root
Dec 12 18:44:16.394797 sshd[1830]: Connection closed by 147.75.109.163 port 33814
Dec 12 18:44:16.398670 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Dec 12 18:44:16.406633 systemd[1]: sshd@6-10.128.0.44:22-147.75.109.163:33814.service: Deactivated successfully.
Dec 12 18:44:16.411419 systemd[1]: session-7.scope: Deactivated successfully.
Dec 12 18:44:16.411996 systemd[1]: session-7.scope: Consumed 7.064s CPU time, 228.3M memory peak.
Dec 12 18:44:16.414635 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit.
Dec 12 18:44:16.419862 systemd-logind[1490]: Removed session 7.
Dec 12 18:44:24.006145 systemd[1]: Created slice kubepods-besteffort-pod78147ff8_7fdb_4793_a219_1e8545851c2f.slice - libcontainer container kubepods-besteffort-pod78147ff8_7fdb_4793_a219_1e8545851c2f.slice.
Dec 12 18:44:24.042664 kubelet[2762]: I1212 18:44:24.042550 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xzf\" (UniqueName: \"kubernetes.io/projected/78147ff8-7fdb-4793-a219-1e8545851c2f-kube-api-access-t2xzf\") pod \"calico-typha-58885d59c5-p779d\" (UID: \"78147ff8-7fdb-4793-a219-1e8545851c2f\") " pod="calico-system/calico-typha-58885d59c5-p779d"
Dec 12 18:44:24.042664 kubelet[2762]: I1212 18:44:24.042609 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78147ff8-7fdb-4793-a219-1e8545851c2f-tigera-ca-bundle\") pod \"calico-typha-58885d59c5-p779d\" (UID: \"78147ff8-7fdb-4793-a219-1e8545851c2f\") " pod="calico-system/calico-typha-58885d59c5-p779d"
Dec 12 18:44:24.042664 kubelet[2762]: I1212 18:44:24.042641 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/78147ff8-7fdb-4793-a219-1e8545851c2f-typha-certs\") pod \"calico-typha-58885d59c5-p779d\" (UID: \"78147ff8-7fdb-4793-a219-1e8545851c2f\") " pod="calico-system/calico-typha-58885d59c5-p779d"
Dec 12 18:44:24.190386 systemd[1]: Created slice kubepods-besteffort-pod31e4d227_615d_4d14_a62c_dea66e41b101.slice - libcontainer container kubepods-besteffort-pod31e4d227_615d_4d14_a62c_dea66e41b101.slice.
Dec 12 18:44:24.243385 kubelet[2762]: I1212 18:44:24.243325 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gszp2\" (UniqueName: \"kubernetes.io/projected/31e4d227-615d-4d14-a62c-dea66e41b101-kube-api-access-gszp2\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.243385 kubelet[2762]: I1212 18:44:24.243373 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/31e4d227-615d-4d14-a62c-dea66e41b101-cni-net-dir\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.243593 kubelet[2762]: I1212 18:44:24.243431 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/31e4d227-615d-4d14-a62c-dea66e41b101-cni-log-dir\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.243593 kubelet[2762]: I1212 18:44:24.243465 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/31e4d227-615d-4d14-a62c-dea66e41b101-node-certs\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.243593 kubelet[2762]: I1212 18:44:24.243489 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/31e4d227-615d-4d14-a62c-dea66e41b101-policysync\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.243593 kubelet[2762]: I1212 18:44:24.243510 2762
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31e4d227-615d-4d14-a62c-dea66e41b101-xtables-lock\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.243593 kubelet[2762]: I1212 18:44:24.243539 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/31e4d227-615d-4d14-a62c-dea66e41b101-var-run-calico\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.243857 kubelet[2762]: I1212 18:44:24.243564 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/31e4d227-615d-4d14-a62c-dea66e41b101-flexvol-driver-host\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.243857 kubelet[2762]: I1212 18:44:24.243591 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31e4d227-615d-4d14-a62c-dea66e41b101-lib-modules\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.243857 kubelet[2762]: I1212 18:44:24.243616 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31e4d227-615d-4d14-a62c-dea66e41b101-tigera-ca-bundle\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.243857 kubelet[2762]: I1212 18:44:24.243647 2762 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/31e4d227-615d-4d14-a62c-dea66e41b101-cni-bin-dir\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.243857 kubelet[2762]: I1212 18:44:24.243673 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31e4d227-615d-4d14-a62c-dea66e41b101-var-lib-calico\") pod \"calico-node-vj56x\" (UID: \"31e4d227-615d-4d14-a62c-dea66e41b101\") " pod="calico-system/calico-node-vj56x"
Dec 12 18:44:24.277775 kubelet[2762]: E1212 18:44:24.275652 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8"
Dec 12 18:44:24.321424 containerd[1509]: time="2025-12-12T18:44:24.321133131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58885d59c5-p779d,Uid:78147ff8-7fdb-4793-a219-1e8545851c2f,Namespace:calico-system,Attempt:0,}"
Dec 12 18:44:24.345526 kubelet[2762]: I1212 18:44:24.344614 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8b56ee9b-eb0e-4a48-b289-fe72c1940fc8-varrun\") pod \"csi-node-driver-q4rn9\" (UID: \"8b56ee9b-eb0e-4a48-b289-fe72c1940fc8\") " pod="calico-system/csi-node-driver-q4rn9"
Dec 12 18:44:24.345526 kubelet[2762]: I1212 18:44:24.345347 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbkmg\" (UniqueName: \"kubernetes.io/projected/8b56ee9b-eb0e-4a48-b289-fe72c1940fc8-kube-api-access-fbkmg\") pod
\"csi-node-driver-q4rn9\" (UID: \"8b56ee9b-eb0e-4a48-b289-fe72c1940fc8\") " pod="calico-system/csi-node-driver-q4rn9"
Dec 12 18:44:24.345526 kubelet[2762]: I1212 18:44:24.345476 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8b56ee9b-eb0e-4a48-b289-fe72c1940fc8-registration-dir\") pod \"csi-node-driver-q4rn9\" (UID: \"8b56ee9b-eb0e-4a48-b289-fe72c1940fc8\") " pod="calico-system/csi-node-driver-q4rn9"
Dec 12 18:44:24.347264 kubelet[2762]: I1212 18:44:24.346519 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8b56ee9b-eb0e-4a48-b289-fe72c1940fc8-socket-dir\") pod \"csi-node-driver-q4rn9\" (UID: \"8b56ee9b-eb0e-4a48-b289-fe72c1940fc8\") " pod="calico-system/csi-node-driver-q4rn9"
Dec 12 18:44:24.347264 kubelet[2762]: I1212 18:44:24.346652 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b56ee9b-eb0e-4a48-b289-fe72c1940fc8-kubelet-dir\") pod \"csi-node-driver-q4rn9\" (UID: \"8b56ee9b-eb0e-4a48-b289-fe72c1940fc8\") " pod="calico-system/csi-node-driver-q4rn9"
Dec 12 18:44:24.360995 kubelet[2762]: E1212 18:44:24.358899 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.360995 kubelet[2762]: W1212 18:44:24.358924 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.360995 kubelet[2762]: E1212 18:44:24.358963 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.362986 kubelet[2762]: E1212 18:44:24.362964 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.363527 kubelet[2762]: W1212 18:44:24.363487 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.363700 kubelet[2762]: E1212 18:44:24.363669 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.365422 containerd[1509]: time="2025-12-12T18:44:24.365221822Z" level=info msg="connecting to shim 9b25303a3e25eab55c00ef77249746a0dc655af744e6f0179b05aa371815f8c6" address="unix:///run/containerd/s/7fd4c424315110385cbe62060f98798ed846268244b600bdc4da3519dd4685bf" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:44:24.366387 kubelet[2762]: E1212 18:44:24.366198 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.366387 kubelet[2762]: W1212 18:44:24.366218 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.366387 kubelet[2762]: E1212 18:44:24.366238 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.366747 kubelet[2762]: E1212 18:44:24.366691 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.366747 kubelet[2762]: W1212 18:44:24.366710 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.366747 kubelet[2762]: E1212 18:44:24.366729 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.372890 kubelet[2762]: E1212 18:44:24.372152 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.372890 kubelet[2762]: W1212 18:44:24.372557 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.372890 kubelet[2762]: E1212 18:44:24.372581 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.374012 kubelet[2762]: E1212 18:44:24.373680 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.374012 kubelet[2762]: W1212 18:44:24.373701 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.374012 kubelet[2762]: E1212 18:44:24.373719 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.375179 kubelet[2762]: E1212 18:44:24.374659 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.375179 kubelet[2762]: W1212 18:44:24.374680 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.375179 kubelet[2762]: E1212 18:44:24.374697 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.377478 kubelet[2762]: E1212 18:44:24.377303 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.377478 kubelet[2762]: W1212 18:44:24.377322 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.377478 kubelet[2762]: E1212 18:44:24.377339 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.379773 kubelet[2762]: E1212 18:44:24.379753 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.379977 kubelet[2762]: W1212 18:44:24.379863 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.379977 kubelet[2762]: E1212 18:44:24.379885 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.387433 kubelet[2762]: E1212 18:44:24.387323 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.387668 kubelet[2762]: W1212 18:44:24.387604 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.387668 kubelet[2762]: E1212 18:44:24.387633 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.437645 systemd[1]: Started cri-containerd-9b25303a3e25eab55c00ef77249746a0dc655af744e6f0179b05aa371815f8c6.scope - libcontainer container 9b25303a3e25eab55c00ef77249746a0dc655af744e6f0179b05aa371815f8c6.
Dec 12 18:44:24.448147 kubelet[2762]: E1212 18:44:24.448113 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.448321 kubelet[2762]: W1212 18:44:24.448146 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.448321 kubelet[2762]: E1212 18:44:24.448202 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.448840 kubelet[2762]: E1212 18:44:24.448817 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.448840 kubelet[2762]: W1212 18:44:24.448840 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.449012 kubelet[2762]: E1212 18:44:24.448888 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.449507 kubelet[2762]: E1212 18:44:24.449467 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.449618 kubelet[2762]: W1212 18:44:24.449514 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.449618 kubelet[2762]: E1212 18:44:24.449534 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.450080 kubelet[2762]: E1212 18:44:24.450057 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.450080 kubelet[2762]: W1212 18:44:24.450079 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.452077 kubelet[2762]: E1212 18:44:24.450098 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.452077 kubelet[2762]: E1212 18:44:24.450663 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.452077 kubelet[2762]: W1212 18:44:24.450681 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.452077 kubelet[2762]: E1212 18:44:24.450700 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.452077 kubelet[2762]: E1212 18:44:24.451546 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.452077 kubelet[2762]: W1212 18:44:24.451564 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.452077 kubelet[2762]: E1212 18:44:24.451621 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.452526 kubelet[2762]: E1212 18:44:24.452500 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.452526 kubelet[2762]: W1212 18:44:24.452526 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.452714 kubelet[2762]: E1212 18:44:24.452546 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.454226 kubelet[2762]: E1212 18:44:24.454187 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.454226 kubelet[2762]: W1212 18:44:24.454214 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.454357 kubelet[2762]: E1212 18:44:24.454234 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.456351 kubelet[2762]: E1212 18:44:24.456325 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.456351 kubelet[2762]: W1212 18:44:24.456350 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.456580 kubelet[2762]: E1212 18:44:24.456370 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.457369 kubelet[2762]: E1212 18:44:24.457342 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.457369 kubelet[2762]: W1212 18:44:24.457367 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.457570 kubelet[2762]: E1212 18:44:24.457387 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.458015 kubelet[2762]: E1212 18:44:24.457989 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.458015 kubelet[2762]: W1212 18:44:24.458014 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.458178 kubelet[2762]: E1212 18:44:24.458033 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.458652 kubelet[2762]: E1212 18:44:24.458617 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.458652 kubelet[2762]: W1212 18:44:24.458643 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.458776 kubelet[2762]: E1212 18:44:24.458718 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.459783 kubelet[2762]: E1212 18:44:24.459761 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.459783 kubelet[2762]: W1212 18:44:24.459783 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.459990 kubelet[2762]: E1212 18:44:24.459803 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.461222 kubelet[2762]: E1212 18:44:24.461181 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.461222 kubelet[2762]: W1212 18:44:24.461208 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.461377 kubelet[2762]: E1212 18:44:24.461244 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.462595 kubelet[2762]: E1212 18:44:24.462566 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.462595 kubelet[2762]: W1212 18:44:24.462592 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.462921 kubelet[2762]: E1212 18:44:24.462612 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.463550 kubelet[2762]: E1212 18:44:24.463522 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.463550 kubelet[2762]: W1212 18:44:24.463548 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.463719 kubelet[2762]: E1212 18:44:24.463569 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.464736 kubelet[2762]: E1212 18:44:24.464713 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.464736 kubelet[2762]: W1212 18:44:24.464734 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.464896 kubelet[2762]: E1212 18:44:24.464757 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 12 18:44:24.465536 kubelet[2762]: E1212 18:44:24.465508 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.465536 kubelet[2762]: W1212 18:44:24.465533 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.465732 kubelet[2762]: E1212 18:44:24.465553 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:24.467928 kubelet[2762]: E1212 18:44:24.467894 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:24.467928 kubelet[2762]: W1212 18:44:24.467928 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:24.468104 kubelet[2762]: E1212 18:44:24.467949 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 12 18:44:24.468345 kubelet[2762]: E1212 18:44:24.468319 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:44:24.468345 kubelet[2762]: W1212 18:44:24.468343 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:44:24.468645 kubelet[2762]: E1212 18:44:24.468361 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:44:24.469556 kubelet[2762]: E1212 18:44:24.469531 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:44:24.469556 kubelet[2762]: W1212 18:44:24.469555 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:44:24.469735 kubelet[2762]: E1212 18:44:24.469575 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:44:24.470000 kubelet[2762]: E1212 18:44:24.469966 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:44:24.470000 kubelet[2762]: W1212 18:44:24.469991 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:44:24.470142 kubelet[2762]: E1212 18:44:24.470013 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:44:24.471796 kubelet[2762]: E1212 18:44:24.471740 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:44:24.471796 kubelet[2762]: W1212 18:44:24.471760 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:44:24.471796 kubelet[2762]: E1212 18:44:24.471780 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:44:24.472772 kubelet[2762]: E1212 18:44:24.472748 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:44:24.472772 kubelet[2762]: W1212 18:44:24.472772 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:44:24.472927 kubelet[2762]: E1212 18:44:24.472791 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:44:24.475154 kubelet[2762]: E1212 18:44:24.475108 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:44:24.475154 kubelet[2762]: W1212 18:44:24.475135 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:44:24.475154 kubelet[2762]: E1212 18:44:24.475156 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:44:24.500905 kubelet[2762]: E1212 18:44:24.500795 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:44:24.500905 kubelet[2762]: W1212 18:44:24.500844 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:44:24.500905 kubelet[2762]: E1212 18:44:24.500869 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:44:24.505729 containerd[1509]: time="2025-12-12T18:44:24.505686497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vj56x,Uid:31e4d227-615d-4d14-a62c-dea66e41b101,Namespace:calico-system,Attempt:0,}" Dec 12 18:44:24.544860 containerd[1509]: time="2025-12-12T18:44:24.544522719Z" level=info msg="connecting to shim 8a8afc46c0772c043b05c5a8d190744ea963ea01eac34394213ef5796844fc5f" address="unix:///run/containerd/s/15749852903b585e5d7d25c7bb7a06dd83ccfe538339a1c6ba7fe793d7e2370d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:44:24.605961 systemd[1]: Started cri-containerd-8a8afc46c0772c043b05c5a8d190744ea963ea01eac34394213ef5796844fc5f.scope - libcontainer container 8a8afc46c0772c043b05c5a8d190744ea963ea01eac34394213ef5796844fc5f. 
Dec 12 18:44:24.622575 containerd[1509]: time="2025-12-12T18:44:24.622528593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58885d59c5-p779d,Uid:78147ff8-7fdb-4793-a219-1e8545851c2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b25303a3e25eab55c00ef77249746a0dc655af744e6f0179b05aa371815f8c6\""
Dec 12 18:44:24.626366 containerd[1509]: time="2025-12-12T18:44:24.626096520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 12 18:44:24.686020 containerd[1509]: time="2025-12-12T18:44:24.685945433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vj56x,Uid:31e4d227-615d-4d14-a62c-dea66e41b101,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a8afc46c0772c043b05c5a8d190744ea963ea01eac34394213ef5796844fc5f\""
Dec 12 18:44:25.440160 kubelet[2762]: E1212 18:44:25.439742 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8"
Dec 12 18:44:25.441533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495140418.mount: Deactivated successfully.
Dec 12 18:44:26.569542 containerd[1509]: time="2025-12-12T18:44:26.569479149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:44:26.571054 containerd[1509]: time="2025-12-12T18:44:26.570802375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Dec 12 18:44:26.572145 containerd[1509]: time="2025-12-12T18:44:26.572109868Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:44:26.574772 containerd[1509]: time="2025-12-12T18:44:26.574736737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:44:26.575728 containerd[1509]: time="2025-12-12T18:44:26.575692336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.948984758s"
Dec 12 18:44:26.575825 containerd[1509]: time="2025-12-12T18:44:26.575735731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Dec 12 18:44:26.577853 containerd[1509]: time="2025-12-12T18:44:26.577820508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 12 18:44:26.600514 containerd[1509]: time="2025-12-12T18:44:26.600477528Z" level=info msg="CreateContainer within sandbox \"9b25303a3e25eab55c00ef77249746a0dc655af744e6f0179b05aa371815f8c6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 12 18:44:26.610414 containerd[1509]: time="2025-12-12T18:44:26.610305307Z" level=info msg="Container 907880ed7cd6a43a0e098c5bfa1c8e2f4a954a6fde9ec10cfd871ba393538586: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:44:26.624994 containerd[1509]: time="2025-12-12T18:44:26.624095625Z" level=info msg="CreateContainer within sandbox \"9b25303a3e25eab55c00ef77249746a0dc655af744e6f0179b05aa371815f8c6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"907880ed7cd6a43a0e098c5bfa1c8e2f4a954a6fde9ec10cfd871ba393538586\""
Dec 12 18:44:26.627454 containerd[1509]: time="2025-12-12T18:44:26.627420193Z" level=info msg="StartContainer for \"907880ed7cd6a43a0e098c5bfa1c8e2f4a954a6fde9ec10cfd871ba393538586\""
Dec 12 18:44:26.630615 containerd[1509]: time="2025-12-12T18:44:26.630565231Z" level=info msg="connecting to shim 907880ed7cd6a43a0e098c5bfa1c8e2f4a954a6fde9ec10cfd871ba393538586" address="unix:///run/containerd/s/7fd4c424315110385cbe62060f98798ed846268244b600bdc4da3519dd4685bf" protocol=ttrpc version=3
Dec 12 18:44:26.670751 systemd[1]: Started cri-containerd-907880ed7cd6a43a0e098c5bfa1c8e2f4a954a6fde9ec10cfd871ba393538586.scope - libcontainer container 907880ed7cd6a43a0e098c5bfa1c8e2f4a954a6fde9ec10cfd871ba393538586.
Dec 12 18:44:26.749880 containerd[1509]: time="2025-12-12T18:44:26.749832896Z" level=info msg="StartContainer for \"907880ed7cd6a43a0e098c5bfa1c8e2f4a954a6fde9ec10cfd871ba393538586\" returns successfully"
Dec 12 18:44:27.433417 containerd[1509]: time="2025-12-12T18:44:27.433359408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:44:27.434485 containerd[1509]: time="2025-12-12T18:44:27.434382208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Dec 12 18:44:27.436934 containerd[1509]: time="2025-12-12T18:44:27.435565616Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:44:27.438137 containerd[1509]: time="2025-12-12T18:44:27.437966103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:44:27.440726 containerd[1509]: time="2025-12-12T18:44:27.440663064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 862.798653ms"
Dec 12 18:44:27.440726 containerd[1509]: time="2025-12-12T18:44:27.440704287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Dec 12 18:44:27.445551 kubelet[2762]: E1212 18:44:27.445491 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8"
Dec 12 18:44:27.448110 containerd[1509]: time="2025-12-12T18:44:27.448000779Z" level=info msg="CreateContainer within sandbox \"8a8afc46c0772c043b05c5a8d190744ea963ea01eac34394213ef5796844fc5f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 12 18:44:27.461390 containerd[1509]: time="2025-12-12T18:44:27.461339968Z" level=info msg="Container 7bb68134f6c74cc6f74b31bdf21a2b5219ea107b7e261f7bd75fc87fc7098cc1: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:44:27.469868 containerd[1509]: time="2025-12-12T18:44:27.469813368Z" level=info msg="CreateContainer within sandbox \"8a8afc46c0772c043b05c5a8d190744ea963ea01eac34394213ef5796844fc5f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7bb68134f6c74cc6f74b31bdf21a2b5219ea107b7e261f7bd75fc87fc7098cc1\""
Dec 12 18:44:27.470664 containerd[1509]: time="2025-12-12T18:44:27.470631437Z" level=info msg="StartContainer for \"7bb68134f6c74cc6f74b31bdf21a2b5219ea107b7e261f7bd75fc87fc7098cc1\""
Dec 12 18:44:27.473128 containerd[1509]: time="2025-12-12T18:44:27.473091800Z" level=info msg="connecting to shim 7bb68134f6c74cc6f74b31bdf21a2b5219ea107b7e261f7bd75fc87fc7098cc1" address="unix:///run/containerd/s/15749852903b585e5d7d25c7bb7a06dd83ccfe538339a1c6ba7fe793d7e2370d" protocol=ttrpc version=3
Dec 12 18:44:27.500635 systemd[1]: Started cri-containerd-7bb68134f6c74cc6f74b31bdf21a2b5219ea107b7e261f7bd75fc87fc7098cc1.scope - libcontainer container 7bb68134f6c74cc6f74b31bdf21a2b5219ea107b7e261f7bd75fc87fc7098cc1.
Dec 12 18:44:27.613866 containerd[1509]: time="2025-12-12T18:44:27.613462183Z" level=info msg="StartContainer for \"7bb68134f6c74cc6f74b31bdf21a2b5219ea107b7e261f7bd75fc87fc7098cc1\" returns successfully"
Dec 12 18:44:27.632890 kubelet[2762]: I1212 18:44:27.632805 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58885d59c5-p779d" podStartSLOduration=2.680328351 podStartE2EDuration="4.632784307s" podCreationTimestamp="2025-12-12 18:44:23 +0000 UTC" firstStartedPulling="2025-12-12 18:44:24.625104326 +0000 UTC m=+25.448864070" lastFinishedPulling="2025-12-12 18:44:26.577560288 +0000 UTC m=+27.401320026" observedRunningTime="2025-12-12 18:44:27.629917122 +0000 UTC m=+28.453676877" watchObservedRunningTime="2025-12-12 18:44:27.632784307 +0000 UTC m=+28.456544050"
Dec 12 18:44:27.641008 kubelet[2762]: E1212 18:44:27.640850 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.641008 kubelet[2762]: W1212 18:44:27.640878 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.641688 kubelet[2762]: E1212 18:44:27.641380 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.642308 kubelet[2762]: E1212 18:44:27.642101 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.642308 kubelet[2762]: W1212 18:44:27.642151 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.642308 kubelet[2762]: E1212 18:44:27.642176 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.643272 kubelet[2762]: E1212 18:44:27.643109 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.643272 kubelet[2762]: W1212 18:44:27.643126 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.643272 kubelet[2762]: E1212 18:44:27.643146 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.644303 kubelet[2762]: E1212 18:44:27.644167 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.644303 kubelet[2762]: W1212 18:44:27.644200 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.644303 kubelet[2762]: E1212 18:44:27.644219 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.645247 kubelet[2762]: E1212 18:44:27.645120 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.645247 kubelet[2762]: W1212 18:44:27.645172 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.645247 kubelet[2762]: E1212 18:44:27.645193 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.646145 kubelet[2762]: E1212 18:44:27.646056 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.646145 kubelet[2762]: W1212 18:44:27.646075 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.646539 kubelet[2762]: E1212 18:44:27.646091 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.647306 kubelet[2762]: E1212 18:44:27.647101 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.647306 kubelet[2762]: W1212 18:44:27.647122 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.647306 kubelet[2762]: E1212 18:44:27.647140 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.648039 kubelet[2762]: E1212 18:44:27.647891 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.648039 kubelet[2762]: W1212 18:44:27.647906 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.648304 kubelet[2762]: E1212 18:44:27.647920 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.648755 kubelet[2762]: E1212 18:44:27.648733 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.648755 kubelet[2762]: W1212 18:44:27.648752 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.649174 kubelet[2762]: E1212 18:44:27.648770 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.649540 kubelet[2762]: E1212 18:44:27.649512 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.649540 kubelet[2762]: W1212 18:44:27.649534 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.649704 kubelet[2762]: E1212 18:44:27.649552 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.650908 kubelet[2762]: E1212 18:44:27.650853 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.650908 kubelet[2762]: W1212 18:44:27.650877 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.650908 kubelet[2762]: E1212 18:44:27.650896 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.651965 kubelet[2762]: E1212 18:44:27.651906 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.651965 kubelet[2762]: W1212 18:44:27.651925 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.651965 kubelet[2762]: E1212 18:44:27.651943 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.652595 kubelet[2762]: E1212 18:44:27.652515 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.652595 kubelet[2762]: W1212 18:44:27.652537 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.652595 kubelet[2762]: E1212 18:44:27.652554 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.654539 kubelet[2762]: E1212 18:44:27.654508 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.654539 kubelet[2762]: W1212 18:44:27.654532 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.654706 kubelet[2762]: E1212 18:44:27.654555 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.655270 kubelet[2762]: E1212 18:44:27.655199 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:44:27.655270 kubelet[2762]: W1212 18:44:27.655222 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:44:27.655270 kubelet[2762]: E1212 18:44:27.655238 2762 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:44:27.680894 systemd[1]: cri-containerd-7bb68134f6c74cc6f74b31bdf21a2b5219ea107b7e261f7bd75fc87fc7098cc1.scope: Deactivated successfully.
Dec 12 18:44:27.691728 containerd[1509]: time="2025-12-12T18:44:27.691597554Z" level=info msg="received container exit event container_id:\"7bb68134f6c74cc6f74b31bdf21a2b5219ea107b7e261f7bd75fc87fc7098cc1\" id:\"7bb68134f6c74cc6f74b31bdf21a2b5219ea107b7e261f7bd75fc87fc7098cc1\" pid:3390 exited_at:{seconds:1765565067 nanos:688873689}"
Dec 12 18:44:27.735377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bb68134f6c74cc6f74b31bdf21a2b5219ea107b7e261f7bd75fc87fc7098cc1-rootfs.mount: Deactivated successfully.
Dec 12 18:44:28.614321 containerd[1509]: time="2025-12-12T18:44:28.614274086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Dec 12 18:44:29.440756 kubelet[2762]: E1212 18:44:29.440654 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8"
Dec 12 18:44:31.438993 kubelet[2762]: E1212 18:44:31.438576 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8"
Dec 12 18:44:31.815882 containerd[1509]: time="2025-12-12T18:44:31.815821678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:44:31.817326 containerd[1509]: time="2025-12-12T18:44:31.817106817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Dec 12 18:44:31.818323 containerd[1509]: time="2025-12-12T18:44:31.818279340Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:44:31.821167 containerd[1509]: time="2025-12-12T18:44:31.821100625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:44:31.822329 containerd[1509]: time="2025-12-12T18:44:31.822175978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.207845204s"
Dec 12 18:44:31.822329 containerd[1509]: time="2025-12-12T18:44:31.822219395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Dec 12 18:44:31.827421 containerd[1509]: time="2025-12-12T18:44:31.827357632Z" level=info msg="CreateContainer within sandbox \"8a8afc46c0772c043b05c5a8d190744ea963ea01eac34394213ef5796844fc5f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 12 18:44:31.843455 containerd[1509]: time="2025-12-12T18:44:31.842902345Z" level=info msg="Container dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:44:31.854250 containerd[1509]: time="2025-12-12T18:44:31.854188238Z" level=info msg="CreateContainer within sandbox \"8a8afc46c0772c043b05c5a8d190744ea963ea01eac34394213ef5796844fc5f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609\""
Dec 12 18:44:31.855999 containerd[1509]: time="2025-12-12T18:44:31.854830351Z" level=info msg="StartContainer for \"dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609\""
Dec 12 18:44:31.857133 containerd[1509]: time="2025-12-12T18:44:31.857097760Z" level=info msg="connecting to shim dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609" address="unix:///run/containerd/s/15749852903b585e5d7d25c7bb7a06dd83ccfe538339a1c6ba7fe793d7e2370d" protocol=ttrpc version=3
Dec 12 18:44:31.890653 systemd[1]: Started cri-containerd-dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609.scope - libcontainer container dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609.
Dec 12 18:44:31.988017 containerd[1509]: time="2025-12-12T18:44:31.987956195Z" level=info msg="StartContainer for \"dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609\" returns successfully"
Dec 12 18:44:32.955776 containerd[1509]: time="2025-12-12T18:44:32.955680321Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 18:44:32.961686 systemd[1]: cri-containerd-dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609.scope: Deactivated successfully.
Dec 12 18:44:32.962570 systemd[1]: cri-containerd-dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609.scope: Consumed 632ms CPU time, 191M memory peak, 171.3M written to disk.
Dec 12 18:44:32.964381 containerd[1509]: time="2025-12-12T18:44:32.964342103Z" level=info msg="received container exit event container_id:\"dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609\" id:\"dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609\" pid:3474 exited_at:{seconds:1765565072 nanos:963351041}"
Dec 12 18:44:33.002150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dba9adf93f38b35984b3256a6f6197fab50d182e5edba9eeaa90a4f8288e0609-rootfs.mount: Deactivated successfully.
Dec 12 18:44:33.065948 kubelet[2762]: I1212 18:44:33.064098 2762 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Dec 12 18:44:33.175427 systemd[1]: Created slice kubepods-burstable-pod3cd42596_095e_4d39_95c6_e096d2692550.slice - libcontainer container kubepods-burstable-pod3cd42596_095e_4d39_95c6_e096d2692550.slice.
Dec 12 18:44:33.237706 kubelet[2762]: I1212 18:44:33.233616 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cd42596-095e-4d39-95c6-e096d2692550-config-volume\") pod \"coredns-66bc5c9577-vmf9l\" (UID: \"3cd42596-095e-4d39-95c6-e096d2692550\") " pod="kube-system/coredns-66bc5c9577-vmf9l"
Dec 12 18:44:33.237706 kubelet[2762]: I1212 18:44:33.233677 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8726\" (UniqueName: \"kubernetes.io/projected/3cd42596-095e-4d39-95c6-e096d2692550-kube-api-access-k8726\") pod \"coredns-66bc5c9577-vmf9l\" (UID: \"3cd42596-095e-4d39-95c6-e096d2692550\") " pod="kube-system/coredns-66bc5c9577-vmf9l"
Dec 12 18:44:33.374088 systemd[1]: Created slice kubepods-besteffort-pod08e815af_ab83_49a2_90b9_9c46cfab01ce.slice - libcontainer container kubepods-besteffort-pod08e815af_ab83_49a2_90b9_9c46cfab01ce.slice.
Dec 12 18:44:33.435269 kubelet[2762]: I1212 18:44:33.435201 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/08e815af-ab83-49a2-90b9-9c46cfab01ce-whisker-backend-key-pair\") pod \"whisker-677cf97cbf-4xrfr\" (UID: \"08e815af-ab83-49a2-90b9-9c46cfab01ce\") " pod="calico-system/whisker-677cf97cbf-4xrfr"
Dec 12 18:44:33.435269 kubelet[2762]: I1212 18:44:33.435269 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e815af-ab83-49a2-90b9-9c46cfab01ce-whisker-ca-bundle\") pod \"whisker-677cf97cbf-4xrfr\" (UID: \"08e815af-ab83-49a2-90b9-9c46cfab01ce\") " pod="calico-system/whisker-677cf97cbf-4xrfr"
Dec 12 18:44:33.554662 kubelet[2762]: I1212 18:44:33.435302 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8thxb\" (UniqueName: \"kubernetes.io/projected/08e815af-ab83-49a2-90b9-9c46cfab01ce-kube-api-access-8thxb\") pod \"whisker-677cf97cbf-4xrfr\" (UID: \"08e815af-ab83-49a2-90b9-9c46cfab01ce\") " pod="calico-system/whisker-677cf97cbf-4xrfr"
Dec 12 18:44:33.567364 containerd[1509]: time="2025-12-12T18:44:33.566572670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vmf9l,Uid:3cd42596-095e-4d39-95c6-e096d2692550,Namespace:kube-system,Attempt:0,}"
Dec 12 18:44:33.572759 systemd[1]: Created slice kubepods-besteffort-pod7a3fffb7_4ae7_40ae_ba0a_4502dfe78f4a.slice - libcontainer container kubepods-besteffort-pod7a3fffb7_4ae7_40ae_ba0a_4502dfe78f4a.slice.
Dec 12 18:44:33.636882 kubelet[2762]: I1212 18:44:33.636043 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxw9t\" (UniqueName: \"kubernetes.io/projected/845a4208-e218-43c1-932f-f50e27d32bf1-kube-api-access-rxw9t\") pod \"calico-apiserver-7bc859bc98-d98w5\" (UID: \"845a4208-e218-43c1-932f-f50e27d32bf1\") " pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5"
Dec 12 18:44:33.636882 kubelet[2762]: I1212 18:44:33.636106 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf5n9\" (UniqueName: \"kubernetes.io/projected/c6711621-2459-4031-98bb-2eedd5c212f5-kube-api-access-kf5n9\") pod \"calico-kube-controllers-5587cf7fbb-rjpxw\" (UID: \"c6711621-2459-4031-98bb-2eedd5c212f5\") " pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw"
Dec 12 18:44:33.636882 kubelet[2762]: I1212 18:44:33.636138 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/845a4208-e218-43c1-932f-f50e27d32bf1-calico-apiserver-certs\") pod \"calico-apiserver-7bc859bc98-d98w5\" (UID: \"845a4208-e218-43c1-932f-f50e27d32bf1\") " pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5"
Dec 12 18:44:33.636882 kubelet[2762]: I1212 18:44:33.636207 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a-calico-apiserver-certs\") pod \"calico-apiserver-7bc859bc98-p9lx6\" (UID: \"7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a\") " pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6"
Dec 12 18:44:33.636882 kubelet[2762]: I1212 18:44:33.636241 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr4fv\" (UniqueName: \"kubernetes.io/projected/7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a-kube-api-access-pr4fv\") pod \"calico-apiserver-7bc859bc98-p9lx6\" (UID: \"7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a\") " pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6"
Dec 12 18:44:33.637264 kubelet[2762]: I1212 18:44:33.636275 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6711621-2459-4031-98bb-2eedd5c212f5-tigera-ca-bundle\") pod \"calico-kube-controllers-5587cf7fbb-rjpxw\" (UID: \"c6711621-2459-4031-98bb-2eedd5c212f5\") " pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw"
Dec 12 18:44:33.646916 systemd[1]: Created slice kubepods-besteffort-podc6711621_2459_4031_98bb_2eedd5c212f5.slice - libcontainer container kubepods-besteffort-podc6711621_2459_4031_98bb_2eedd5c212f5.slice.
Dec 12 18:44:33.664955 systemd[1]: Created slice kubepods-besteffort-pod845a4208_e218_43c1_932f_f50e27d32bf1.slice - libcontainer container kubepods-besteffort-pod845a4208_e218_43c1_932f_f50e27d32bf1.slice.
Dec 12 18:44:33.687596 systemd[1]: Created slice kubepods-besteffort-pod8b56ee9b_eb0e_4a48_b289_fe72c1940fc8.slice - libcontainer container kubepods-besteffort-pod8b56ee9b_eb0e_4a48_b289_fe72c1940fc8.slice.
Dec 12 18:44:33.707108 systemd[1]: Created slice kubepods-besteffort-pod366b1ec7_f851_4339_83ca_caf896aa2049.slice - libcontainer container kubepods-besteffort-pod366b1ec7_f851_4339_83ca_caf896aa2049.slice.
Dec 12 18:44:33.713571 containerd[1509]: time="2025-12-12T18:44:33.713494355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q4rn9,Uid:8b56ee9b-eb0e-4a48-b289-fe72c1940fc8,Namespace:calico-system,Attempt:0,}"
Dec 12 18:44:33.715830 containerd[1509]: time="2025-12-12T18:44:33.715789506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Dec 12 18:44:33.724449 systemd[1]: Created slice kubepods-burstable-pod1c7bcd16_486e_4e17_8ffa_b62714db730c.slice - libcontainer container kubepods-burstable-pod1c7bcd16_486e_4e17_8ffa_b62714db730c.slice.
Dec 12 18:44:33.738507 kubelet[2762]: I1212 18:44:33.738465 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/366b1ec7-f851-4339-83ca-caf896aa2049-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-xcgsd\" (UID: \"366b1ec7-f851-4339-83ca-caf896aa2049\") " pod="calico-system/goldmane-7c778bb748-xcgsd"
Dec 12 18:44:33.738622 kubelet[2762]: I1212 18:44:33.738592 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/366b1ec7-f851-4339-83ca-caf896aa2049-goldmane-key-pair\") pod \"goldmane-7c778bb748-xcgsd\" (UID: \"366b1ec7-f851-4339-83ca-caf896aa2049\") " pod="calico-system/goldmane-7c778bb748-xcgsd"
Dec 12 18:44:33.738696 kubelet[2762]: I1212 18:44:33.738664 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5sfg\" (UniqueName: \"kubernetes.io/projected/1c7bcd16-486e-4e17-8ffa-b62714db730c-kube-api-access-f5sfg\") pod \"coredns-66bc5c9577-2h7p4\" (UID: \"1c7bcd16-486e-4e17-8ffa-b62714db730c\") " pod="kube-system/coredns-66bc5c9577-2h7p4"
Dec 12 18:44:33.738766 kubelet[2762]: I1212 18:44:33.738720 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgsxd\" (UniqueName: \"kubernetes.io/projected/366b1ec7-f851-4339-83ca-caf896aa2049-kube-api-access-lgsxd\") pod \"goldmane-7c778bb748-xcgsd\" (UID: \"366b1ec7-f851-4339-83ca-caf896aa2049\") " pod="calico-system/goldmane-7c778bb748-xcgsd"
Dec 12 18:44:33.738766 kubelet[2762]: I1212 18:44:33.738750 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c7bcd16-486e-4e17-8ffa-b62714db730c-config-volume\") pod \"coredns-66bc5c9577-2h7p4\" (UID: \"1c7bcd16-486e-4e17-8ffa-b62714db730c\") " pod="kube-system/coredns-66bc5c9577-2h7p4"
Dec 12 18:44:33.738870 kubelet[2762]: I1212 18:44:33.738782 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366b1ec7-f851-4339-83ca-caf896aa2049-config\") pod \"goldmane-7c778bb748-xcgsd\" (UID: \"366b1ec7-f851-4339-83ca-caf896aa2049\") " pod="calico-system/goldmane-7c778bb748-xcgsd"
Dec 12 18:44:33.861066 containerd[1509]: time="2025-12-12T18:44:33.860744615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-677cf97cbf-4xrfr,Uid:08e815af-ab83-49a2-90b9-9c46cfab01ce,Namespace:calico-system,Attempt:0,}"
Dec 12 18:44:33.907882 containerd[1509]: time="2025-12-12T18:44:33.907817153Z" level=error msg="Failed to destroy network for sandbox \"18b7bec2cb73739ea4d0b18592d194e02e3430ea43b067a05e35c608df47ed49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:33.909791 containerd[1509]: time="2025-12-12T18:44:33.909736332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vmf9l,Uid:3cd42596-095e-4d39-95c6-e096d2692550,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b7bec2cb73739ea4d0b18592d194e02e3430ea43b067a05e35c608df47ed49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:33.911087 kubelet[2762]: E1212 18:44:33.910077 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b7bec2cb73739ea4d0b18592d194e02e3430ea43b067a05e35c608df47ed49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:33.911087 kubelet[2762]: E1212 18:44:33.910173 2762 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b7bec2cb73739ea4d0b18592d194e02e3430ea43b067a05e35c608df47ed49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vmf9l"
Dec 12 18:44:33.911087 kubelet[2762]: E1212 18:44:33.910223 2762 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b7bec2cb73739ea4d0b18592d194e02e3430ea43b067a05e35c608df47ed49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vmf9l"
Dec 12 18:44:33.911553 kubelet[2762]: E1212 18:44:33.910327 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-vmf9l_kube-system(3cd42596-095e-4d39-95c6-e096d2692550)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vmf9l_kube-system(3cd42596-095e-4d39-95c6-e096d2692550)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18b7bec2cb73739ea4d0b18592d194e02e3430ea43b067a05e35c608df47ed49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vmf9l" podUID="3cd42596-095e-4d39-95c6-e096d2692550"
Dec 12 18:44:33.923746 containerd[1509]: time="2025-12-12T18:44:33.923691317Z" level=error msg="Failed to destroy network for sandbox \"26a99f201c10f4d852a26670e12362808b1ef811caead2a8d5b3205f5b1822d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:33.926572 containerd[1509]: time="2025-12-12T18:44:33.926382799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q4rn9,Uid:8b56ee9b-eb0e-4a48-b289-fe72c1940fc8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a99f201c10f4d852a26670e12362808b1ef811caead2a8d5b3205f5b1822d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:33.926977 kubelet[2762]: E1212 18:44:33.926930 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a99f201c10f4d852a26670e12362808b1ef811caead2a8d5b3205f5b1822d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:33.927101 kubelet[2762]: E1212 18:44:33.926999 2762 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a99f201c10f4d852a26670e12362808b1ef811caead2a8d5b3205f5b1822d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q4rn9"
Dec 12 18:44:33.927101 kubelet[2762]: E1212 18:44:33.927032 2762 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a99f201c10f4d852a26670e12362808b1ef811caead2a8d5b3205f5b1822d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q4rn9"
Dec 12 18:44:33.927229 kubelet[2762]: E1212 18:44:33.927104 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q4rn9_calico-system(8b56ee9b-eb0e-4a48-b289-fe72c1940fc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q4rn9_calico-system(8b56ee9b-eb0e-4a48-b289-fe72c1940fc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26a99f201c10f4d852a26670e12362808b1ef811caead2a8d5b3205f5b1822d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8"
Dec 12 18:44:33.936513 containerd[1509]: time="2025-12-12T18:44:33.936392784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc859bc98-p9lx6,Uid:7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 18:44:33.965906 containerd[1509]: time="2025-12-12T18:44:33.965861794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5587cf7fbb-rjpxw,Uid:c6711621-2459-4031-98bb-2eedd5c212f5,Namespace:calico-system,Attempt:0,}"
Dec 12 18:44:33.982136 containerd[1509]: time="2025-12-12T18:44:33.982057300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc859bc98-d98w5,Uid:845a4208-e218-43c1-932f-f50e27d32bf1,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 18:44:33.997125 containerd[1509]: time="2025-12-12T18:44:33.996973593Z" level=error msg="Failed to destroy network for sandbox \"eee9507952069c93012b3c5adc47904bf628ba61ce8c1bd67412329a1b02fc5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.004016 containerd[1509]: time="2025-12-12T18:44:34.003946922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-677cf97cbf-4xrfr,Uid:08e815af-ab83-49a2-90b9-9c46cfab01ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eee9507952069c93012b3c5adc47904bf628ba61ce8c1bd67412329a1b02fc5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.004684 kubelet[2762]: E1212 18:44:34.004603 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eee9507952069c93012b3c5adc47904bf628ba61ce8c1bd67412329a1b02fc5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.005136 kubelet[2762]: E1212 18:44:34.004896 2762 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eee9507952069c93012b3c5adc47904bf628ba61ce8c1bd67412329a1b02fc5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-677cf97cbf-4xrfr"
Dec 12 18:44:34.005568 kubelet[2762]: E1212 18:44:34.005433 2762 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eee9507952069c93012b3c5adc47904bf628ba61ce8c1bd67412329a1b02fc5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-677cf97cbf-4xrfr"
Dec 12 18:44:34.006033 kubelet[2762]: E1212 18:44:34.005961 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-677cf97cbf-4xrfr_calico-system(08e815af-ab83-49a2-90b9-9c46cfab01ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-677cf97cbf-4xrfr_calico-system(08e815af-ab83-49a2-90b9-9c46cfab01ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eee9507952069c93012b3c5adc47904bf628ba61ce8c1bd67412329a1b02fc5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-677cf97cbf-4xrfr" podUID="08e815af-ab83-49a2-90b9-9c46cfab01ce"
Dec 12 18:44:34.031615 containerd[1509]: time="2025-12-12T18:44:34.031095808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xcgsd,Uid:366b1ec7-f851-4339-83ca-caf896aa2049,Namespace:calico-system,Attempt:0,}"
Dec 12 18:44:34.048344 containerd[1509]: time="2025-12-12T18:44:34.047564824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2h7p4,Uid:1c7bcd16-486e-4e17-8ffa-b62714db730c,Namespace:kube-system,Attempt:0,}"
Dec 12 18:44:34.051971 systemd[1]: run-netns-cni\x2d31aa222b\x2dbcf1\x2d4e87\x2df29a\x2df2b555dc9189.mount: Deactivated successfully.
Dec 12 18:44:34.192516 containerd[1509]: time="2025-12-12T18:44:34.192274474Z" level=error msg="Failed to destroy network for sandbox \"a2fde21a3ecb41b5a5d009de3bd7f831c65bee2030309a45c93e1b2fca558be9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.196294 containerd[1509]: time="2025-12-12T18:44:34.196220190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc859bc98-p9lx6,Uid:7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2fde21a3ecb41b5a5d009de3bd7f831c65bee2030309a45c93e1b2fca558be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.198641 kubelet[2762]: E1212 18:44:34.197614 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2fde21a3ecb41b5a5d009de3bd7f831c65bee2030309a45c93e1b2fca558be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.198641 kubelet[2762]: E1212 18:44:34.197749 2762 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2fde21a3ecb41b5a5d009de3bd7f831c65bee2030309a45c93e1b2fca558be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6"
Dec 12 18:44:34.198641 kubelet[2762]: E1212 18:44:34.197788 2762 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2fde21a3ecb41b5a5d009de3bd7f831c65bee2030309a45c93e1b2fca558be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6"
Dec 12 18:44:34.201590 kubelet[2762]: E1212 18:44:34.199858 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bc859bc98-p9lx6_calico-apiserver(7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bc859bc98-p9lx6_calico-apiserver(7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2fde21a3ecb41b5a5d009de3bd7f831c65bee2030309a45c93e1b2fca558be9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6" podUID="7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a"
Dec 12 18:44:34.200020 systemd[1]: run-netns-cni\x2d843d2e0e\x2d7011\x2deb6c\x2d9f1d\x2dcb1d94694f29.mount: Deactivated successfully.
Dec 12 18:44:34.243751 containerd[1509]: time="2025-12-12T18:44:34.243305240Z" level=error msg="Failed to destroy network for sandbox \"57ea86411c079d9fc5bec18def31f93bed23397fd385474350862a35c31a4315\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.248964 containerd[1509]: time="2025-12-12T18:44:34.248910087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xcgsd,Uid:366b1ec7-f851-4339-83ca-caf896aa2049,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ea86411c079d9fc5bec18def31f93bed23397fd385474350862a35c31a4315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.249872 kubelet[2762]: E1212 18:44:34.249455 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ea86411c079d9fc5bec18def31f93bed23397fd385474350862a35c31a4315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.249872 kubelet[2762]: E1212 18:44:34.249528 2762 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ea86411c079d9fc5bec18def31f93bed23397fd385474350862a35c31a4315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xcgsd"
Dec 12 18:44:34.249872 kubelet[2762]: E1212 18:44:34.249564 2762 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ea86411c079d9fc5bec18def31f93bed23397fd385474350862a35c31a4315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xcgsd"
Dec 12 18:44:34.250113 kubelet[2762]: E1212 18:44:34.249641 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-xcgsd_calico-system(366b1ec7-f851-4339-83ca-caf896aa2049)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-xcgsd_calico-system(366b1ec7-f851-4339-83ca-caf896aa2049)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57ea86411c079d9fc5bec18def31f93bed23397fd385474350862a35c31a4315\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-xcgsd" podUID="366b1ec7-f851-4339-83ca-caf896aa2049"
Dec 12 18:44:34.253723 systemd[1]: run-netns-cni\x2d8b3e45d4\x2da5b3\x2d7faa\x2dab7a\x2db080d9de38ca.mount: Deactivated successfully.
Dec 12 18:44:34.282492 containerd[1509]: time="2025-12-12T18:44:34.282439849Z" level=error msg="Failed to destroy network for sandbox \"885293405a241a6d70d7b9604c6f5358aa312183ceae48903271f80b3e7a6ec8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.284223 containerd[1509]: time="2025-12-12T18:44:34.284169170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2h7p4,Uid:1c7bcd16-486e-4e17-8ffa-b62714db730c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"885293405a241a6d70d7b9604c6f5358aa312183ceae48903271f80b3e7a6ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.284596 kubelet[2762]: E1212 18:44:34.284539 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"885293405a241a6d70d7b9604c6f5358aa312183ceae48903271f80b3e7a6ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.284709 kubelet[2762]: E1212 18:44:34.284622 2762 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"885293405a241a6d70d7b9604c6f5358aa312183ceae48903271f80b3e7a6ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2h7p4"
Dec 12 18:44:34.284709 kubelet[2762]: E1212 18:44:34.284658 2762 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"885293405a241a6d70d7b9604c6f5358aa312183ceae48903271f80b3e7a6ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2h7p4"
Dec 12 18:44:34.284819 kubelet[2762]: E1212 18:44:34.284736 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2h7p4_kube-system(1c7bcd16-486e-4e17-8ffa-b62714db730c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2h7p4_kube-system(1c7bcd16-486e-4e17-8ffa-b62714db730c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"885293405a241a6d70d7b9604c6f5358aa312183ceae48903271f80b3e7a6ec8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2h7p4" podUID="1c7bcd16-486e-4e17-8ffa-b62714db730c"
Dec 12 18:44:34.305053 containerd[1509]: time="2025-12-12T18:44:34.304996239Z" level=error msg="Failed to destroy network for sandbox \"bf18d2aec334ae65f96616819044f69c20b22f2f017eaad2d222a4aba80e596a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.307285 containerd[1509]: time="2025-12-12T18:44:34.306864845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc859bc98-d98w5,Uid:845a4208-e218-43c1-932f-f50e27d32bf1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf18d2aec334ae65f96616819044f69c20b22f2f017eaad2d222a4aba80e596a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.307919 kubelet[2762]: E1212 18:44:34.307831 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf18d2aec334ae65f96616819044f69c20b22f2f017eaad2d222a4aba80e596a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.308081 kubelet[2762]: E1212 18:44:34.307942 2762 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf18d2aec334ae65f96616819044f69c20b22f2f017eaad2d222a4aba80e596a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5"
Dec 12 18:44:34.308081 kubelet[2762]: E1212 18:44:34.308013 2762 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf18d2aec334ae65f96616819044f69c20b22f2f017eaad2d222a4aba80e596a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5"
Dec 12 18:44:34.308216 kubelet[2762]: E1212 18:44:34.308122 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bc859bc98-d98w5_calico-apiserver(845a4208-e218-43c1-932f-f50e27d32bf1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bc859bc98-d98w5_calico-apiserver(845a4208-e218-43c1-932f-f50e27d32bf1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf18d2aec334ae65f96616819044f69c20b22f2f017eaad2d222a4aba80e596a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5" podUID="845a4208-e218-43c1-932f-f50e27d32bf1"
Dec 12 18:44:34.311616 containerd[1509]: time="2025-12-12T18:44:34.311572034Z" level=error msg="Failed to destroy network for sandbox \"7113ef25fb5393205f2c956169d6498c40daa0fc1514788c36fb458da2c57803\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.313067 containerd[1509]: time="2025-12-12T18:44:34.313015229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5587cf7fbb-rjpxw,Uid:c6711621-2459-4031-98bb-2eedd5c212f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7113ef25fb5393205f2c956169d6498c40daa0fc1514788c36fb458da2c57803\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.313304 kubelet[2762]: E1212 18:44:34.313264 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7113ef25fb5393205f2c956169d6498c40daa0fc1514788c36fb458da2c57803\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:44:34.313588 kubelet[2762]: E1212 18:44:34.313332 2762 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7113ef25fb5393205f2c956169d6498c40daa0fc1514788c36fb458da2c57803\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw"
Dec 12 18:44:34.313588 kubelet[2762]: E1212 18:44:34.313365 2762 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7113ef25fb5393205f2c956169d6498c40daa0fc1514788c36fb458da2c57803\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw"
Dec 12 18:44:34.313588 kubelet[2762]: E1212 18:44:34.313464 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5587cf7fbb-rjpxw_calico-system(c6711621-2459-4031-98bb-2eedd5c212f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5587cf7fbb-rjpxw_calico-system(c6711621-2459-4031-98bb-2eedd5c212f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7113ef25fb5393205f2c956169d6498c40daa0fc1514788c36fb458da2c57803\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw" podUID="c6711621-2459-4031-98bb-2eedd5c212f5"
Dec 12 18:44:35.002340 systemd[1]: run-netns-cni\x2d3d8ac934\x2dd536\x2d81dc\x2d9663\x2d0ff5e7a3fc79.mount: Deactivated successfully.
Dec 12 18:44:35.002521 systemd[1]: run-netns-cni\x2d3fe9027b\x2d242f\x2dcbe2\x2db305\x2d82b452eefd19.mount: Deactivated successfully. Dec 12 18:44:35.002616 systemd[1]: run-netns-cni\x2d9ecbcf56\x2d4c7b\x2d89fa\x2d661e\x2da173a48a1e36.mount: Deactivated successfully. Dec 12 18:44:40.544290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975174985.mount: Deactivated successfully. Dec 12 18:44:40.573192 containerd[1509]: time="2025-12-12T18:44:40.573125848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:44:40.574566 containerd[1509]: time="2025-12-12T18:44:40.574286046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:44:40.575704 containerd[1509]: time="2025-12-12T18:44:40.575659037Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:44:40.578858 containerd[1509]: time="2025-12-12T18:44:40.578814438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:44:40.579810 containerd[1509]: time="2025-12-12T18:44:40.579773554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.863571558s" Dec 12 18:44:40.579951 containerd[1509]: time="2025-12-12T18:44:40.579925491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:44:40.612442 containerd[1509]: time="2025-12-12T18:44:40.611843518Z" level=info msg="CreateContainer within sandbox \"8a8afc46c0772c043b05c5a8d190744ea963ea01eac34394213ef5796844fc5f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:44:40.626423 containerd[1509]: time="2025-12-12T18:44:40.622728108Z" level=info msg="Container cb187e466180de6651d717082607798bf2ee556738497bfefed748ec28e5901d: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:44:40.644421 containerd[1509]: time="2025-12-12T18:44:40.643671839Z" level=info msg="CreateContainer within sandbox \"8a8afc46c0772c043b05c5a8d190744ea963ea01eac34394213ef5796844fc5f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cb187e466180de6651d717082607798bf2ee556738497bfefed748ec28e5901d\"" Dec 12 18:44:40.647659 containerd[1509]: time="2025-12-12T18:44:40.647625454Z" level=info msg="StartContainer for \"cb187e466180de6651d717082607798bf2ee556738497bfefed748ec28e5901d\"" Dec 12 18:44:40.650792 containerd[1509]: time="2025-12-12T18:44:40.650764753Z" level=info msg="connecting to shim cb187e466180de6651d717082607798bf2ee556738497bfefed748ec28e5901d" address="unix:///run/containerd/s/15749852903b585e5d7d25c7bb7a06dd83ccfe538339a1c6ba7fe793d7e2370d" protocol=ttrpc version=3 Dec 12 18:44:40.689600 systemd[1]: Started cri-containerd-cb187e466180de6651d717082607798bf2ee556738497bfefed748ec28e5901d.scope - libcontainer container cb187e466180de6651d717082607798bf2ee556738497bfefed748ec28e5901d. Dec 12 18:44:40.794921 containerd[1509]: time="2025-12-12T18:44:40.794769828Z" level=info msg="StartContainer for \"cb187e466180de6651d717082607798bf2ee556738497bfefed748ec28e5901d\" returns successfully" Dec 12 18:44:40.910781 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:44:40.910904 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Dec 12 18:44:41.095136 kubelet[2762]: I1212 18:44:41.095004 2762 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8thxb\" (UniqueName: \"kubernetes.io/projected/08e815af-ab83-49a2-90b9-9c46cfab01ce-kube-api-access-8thxb\") pod \"08e815af-ab83-49a2-90b9-9c46cfab01ce\" (UID: \"08e815af-ab83-49a2-90b9-9c46cfab01ce\") " Dec 12 18:44:41.097493 kubelet[2762]: I1212 18:44:41.097453 2762 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e815af-ab83-49a2-90b9-9c46cfab01ce-whisker-ca-bundle\") pod \"08e815af-ab83-49a2-90b9-9c46cfab01ce\" (UID: \"08e815af-ab83-49a2-90b9-9c46cfab01ce\") " Dec 12 18:44:41.099129 kubelet[2762]: I1212 18:44:41.099093 2762 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/08e815af-ab83-49a2-90b9-9c46cfab01ce-whisker-backend-key-pair\") pod \"08e815af-ab83-49a2-90b9-9c46cfab01ce\" (UID: \"08e815af-ab83-49a2-90b9-9c46cfab01ce\") " Dec 12 18:44:41.099505 kubelet[2762]: I1212 18:44:41.099025 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08e815af-ab83-49a2-90b9-9c46cfab01ce-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "08e815af-ab83-49a2-90b9-9c46cfab01ce" (UID: "08e815af-ab83-49a2-90b9-9c46cfab01ce"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:44:41.108829 kubelet[2762]: I1212 18:44:41.108785 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e815af-ab83-49a2-90b9-9c46cfab01ce-kube-api-access-8thxb" (OuterVolumeSpecName: "kube-api-access-8thxb") pod "08e815af-ab83-49a2-90b9-9c46cfab01ce" (UID: "08e815af-ab83-49a2-90b9-9c46cfab01ce"). InnerVolumeSpecName "kube-api-access-8thxb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:44:41.110726 kubelet[2762]: I1212 18:44:41.110671 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e815af-ab83-49a2-90b9-9c46cfab01ce-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "08e815af-ab83-49a2-90b9-9c46cfab01ce" (UID: "08e815af-ab83-49a2-90b9-9c46cfab01ce"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:44:41.199947 kubelet[2762]: I1212 18:44:41.199859 2762 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/08e815af-ab83-49a2-90b9-9c46cfab01ce-whisker-backend-key-pair\") on node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" DevicePath \"\"" Dec 12 18:44:41.199947 kubelet[2762]: I1212 18:44:41.199909 2762 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8thxb\" (UniqueName: \"kubernetes.io/projected/08e815af-ab83-49a2-90b9-9c46cfab01ce-kube-api-access-8thxb\") on node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" DevicePath \"\"" Dec 12 18:44:41.200349 kubelet[2762]: I1212 18:44:41.200282 2762 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e815af-ab83-49a2-90b9-9c46cfab01ce-whisker-ca-bundle\") on node \"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal\" DevicePath \"\"" Dec 12 18:44:41.447949 systemd[1]: Removed slice kubepods-besteffort-pod08e815af_ab83_49a2_90b9_9c46cfab01ce.slice - libcontainer container kubepods-besteffort-pod08e815af_ab83_49a2_90b9_9c46cfab01ce.slice. Dec 12 18:44:41.544376 systemd[1]: var-lib-kubelet-pods-08e815af\x2dab83\x2d49a2\x2d90b9\x2d9c46cfab01ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8thxb.mount: Deactivated successfully. 
Dec 12 18:44:41.544541 systemd[1]: var-lib-kubelet-pods-08e815af\x2dab83\x2d49a2\x2d90b9\x2d9c46cfab01ce-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:44:41.781345 kubelet[2762]: I1212 18:44:41.781242 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vj56x" podStartSLOduration=1.8936384290000001 podStartE2EDuration="17.781221968s" podCreationTimestamp="2025-12-12 18:44:24 +0000 UTC" firstStartedPulling="2025-12-12 18:44:24.693436229 +0000 UTC m=+25.517195974" lastFinishedPulling="2025-12-12 18:44:40.581019784 +0000 UTC m=+41.404779513" observedRunningTime="2025-12-12 18:44:41.766792918 +0000 UTC m=+42.590552690" watchObservedRunningTime="2025-12-12 18:44:41.781221968 +0000 UTC m=+42.604981721" Dec 12 18:44:41.845035 systemd[1]: Created slice kubepods-besteffort-pod07393087_27ea_4193_97b0_830a271e2225.slice - libcontainer container kubepods-besteffort-pod07393087_27ea_4193_97b0_830a271e2225.slice. 
Dec 12 18:44:41.905377 kubelet[2762]: I1212 18:44:41.905319 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07393087-27ea-4193-97b0-830a271e2225-whisker-backend-key-pair\") pod \"whisker-584dfb5d5c-klpbl\" (UID: \"07393087-27ea-4193-97b0-830a271e2225\") " pod="calico-system/whisker-584dfb5d5c-klpbl" Dec 12 18:44:41.905619 kubelet[2762]: I1212 18:44:41.905387 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07393087-27ea-4193-97b0-830a271e2225-whisker-ca-bundle\") pod \"whisker-584dfb5d5c-klpbl\" (UID: \"07393087-27ea-4193-97b0-830a271e2225\") " pod="calico-system/whisker-584dfb5d5c-klpbl" Dec 12 18:44:41.905619 kubelet[2762]: I1212 18:44:41.905436 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmbqp\" (UniqueName: \"kubernetes.io/projected/07393087-27ea-4193-97b0-830a271e2225-kube-api-access-nmbqp\") pod \"whisker-584dfb5d5c-klpbl\" (UID: \"07393087-27ea-4193-97b0-830a271e2225\") " pod="calico-system/whisker-584dfb5d5c-klpbl" Dec 12 18:44:42.155305 containerd[1509]: time="2025-12-12T18:44:42.155174062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-584dfb5d5c-klpbl,Uid:07393087-27ea-4193-97b0-830a271e2225,Namespace:calico-system,Attempt:0,}" Dec 12 18:44:42.289644 systemd-networkd[1422]: cali46423bda393: Link UP Dec 12 18:44:42.291658 systemd-networkd[1422]: cali46423bda393: Gained carrier Dec 12 18:44:42.311739 containerd[1509]: 2025-12-12 18:44:42.190 [INFO][3801] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:44:42.311739 containerd[1509]: 2025-12-12 18:44:42.205 [INFO][3801] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0 whisker-584dfb5d5c- calico-system 07393087-27ea-4193-97b0-830a271e2225 920 0 2025-12-12 18:44:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:584dfb5d5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal whisker-584dfb5d5c-klpbl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali46423bda393 [] [] }} ContainerID="ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" Namespace="calico-system" Pod="whisker-584dfb5d5c-klpbl" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-" Dec 12 18:44:42.311739 containerd[1509]: 2025-12-12 18:44:42.205 [INFO][3801] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" Namespace="calico-system" Pod="whisker-584dfb5d5c-klpbl" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0" Dec 12 18:44:42.311739 containerd[1509]: 2025-12-12 18:44:42.235 [INFO][3814] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" HandleID="k8s-pod-network.ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0" Dec 12 18:44:42.312050 containerd[1509]: 2025-12-12 18:44:42.235 [INFO][3814] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" HandleID="k8s-pod-network.ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" 
Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", "pod":"whisker-584dfb5d5c-klpbl", "timestamp":"2025-12-12 18:44:42.235267658 +0000 UTC"}, Hostname:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:44:42.312050 containerd[1509]: 2025-12-12 18:44:42.236 [INFO][3814] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:44:42.312050 containerd[1509]: 2025-12-12 18:44:42.236 [INFO][3814] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:44:42.312050 containerd[1509]: 2025-12-12 18:44:42.236 [INFO][3814] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal' Dec 12 18:44:42.312050 containerd[1509]: 2025-12-12 18:44:42.244 [INFO][3814] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:42.312050 containerd[1509]: 2025-12-12 18:44:42.249 [INFO][3814] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:42.312050 containerd[1509]: 2025-12-12 18:44:42.254 [INFO][3814] ipam/ipam.go 511: Trying affinity for 192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:42.312050 containerd[1509]: 2025-12-12 18:44:42.259 [INFO][3814] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.192/26 
host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:42.312307 containerd[1509]: 2025-12-12 18:44:42.261 [INFO][3814] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:42.312307 containerd[1509]: 2025-12-12 18:44:42.261 [INFO][3814] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:42.312307 containerd[1509]: 2025-12-12 18:44:42.263 [INFO][3814] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b Dec 12 18:44:42.312307 containerd[1509]: 2025-12-12 18:44:42.267 [INFO][3814] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.192/26 handle="k8s-pod-network.ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:42.312307 containerd[1509]: 2025-12-12 18:44:42.274 [INFO][3814] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.193/26] block=192.168.110.192/26 handle="k8s-pod-network.ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:42.312307 containerd[1509]: 2025-12-12 18:44:42.274 [INFO][3814] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.193/26] handle="k8s-pod-network.ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:42.312307 containerd[1509]: 2025-12-12 18:44:42.274 [INFO][3814] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:44:42.312307 containerd[1509]: 2025-12-12 18:44:42.274 [INFO][3814] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.193/26] IPv6=[] ContainerID="ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" HandleID="k8s-pod-network.ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0" Dec 12 18:44:42.312596 containerd[1509]: 2025-12-12 18:44:42.278 [INFO][3801] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" Namespace="calico-system" Pod="whisker-584dfb5d5c-klpbl" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0", GenerateName:"whisker-584dfb5d5c-", Namespace:"calico-system", SelfLink:"", UID:"07393087-27ea-4193-97b0-830a271e2225", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"584dfb5d5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-584dfb5d5c-klpbl", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.110.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali46423bda393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:42.312682 containerd[1509]: 2025-12-12 18:44:42.278 [INFO][3801] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.193/32] ContainerID="ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" Namespace="calico-system" Pod="whisker-584dfb5d5c-klpbl" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0" Dec 12 18:44:42.312682 containerd[1509]: 2025-12-12 18:44:42.278 [INFO][3801] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46423bda393 ContainerID="ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" Namespace="calico-system" Pod="whisker-584dfb5d5c-klpbl" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0" Dec 12 18:44:42.312682 containerd[1509]: 2025-12-12 18:44:42.292 [INFO][3801] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" Namespace="calico-system" Pod="whisker-584dfb5d5c-klpbl" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0" Dec 12 18:44:42.312786 containerd[1509]: 2025-12-12 18:44:42.293 [INFO][3801] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" Namespace="calico-system" Pod="whisker-584dfb5d5c-klpbl" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0", GenerateName:"whisker-584dfb5d5c-", Namespace:"calico-system", SelfLink:"", UID:"07393087-27ea-4193-97b0-830a271e2225", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"584dfb5d5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b", Pod:"whisker-584dfb5d5c-klpbl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.110.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali46423bda393", MAC:"ba:90:a0:ad:3e:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:42.312858 containerd[1509]: 2025-12-12 18:44:42.305 [INFO][3801] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" Namespace="calico-system" Pod="whisker-584dfb5d5c-klpbl" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-whisker--584dfb5d5c--klpbl-eth0" Dec 12 18:44:42.352478 
containerd[1509]: time="2025-12-12T18:44:42.352418731Z" level=info msg="connecting to shim ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b" address="unix:///run/containerd/s/bfffe5df6b5e1960d01b3a7fc3ed55c4005416f21b67e282268bb7b4899bfa42" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:44:42.396768 systemd[1]: Started cri-containerd-ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b.scope - libcontainer container ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b. Dec 12 18:44:42.576617 containerd[1509]: time="2025-12-12T18:44:42.576515768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-584dfb5d5c-klpbl,Uid:07393087-27ea-4193-97b0-830a271e2225,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef920e32dac81786561fc07019179564730e2c4a5695eb44484a1f77a2db398b\"" Dec 12 18:44:42.580604 containerd[1509]: time="2025-12-12T18:44:42.580572142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:44:42.744952 containerd[1509]: time="2025-12-12T18:44:42.744752641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:42.746749 containerd[1509]: time="2025-12-12T18:44:42.746655353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:44:42.747594 containerd[1509]: time="2025-12-12T18:44:42.746922635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:44:42.747788 kubelet[2762]: E1212 18:44:42.747744 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:44:42.748979 kubelet[2762]: E1212 18:44:42.748476 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:44:42.748979 kubelet[2762]: E1212 18:44:42.748598 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-584dfb5d5c-klpbl_calico-system(07393087-27ea-4193-97b0-830a271e2225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:42.750492 containerd[1509]: time="2025-12-12T18:44:42.750379528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:44:42.911450 containerd[1509]: time="2025-12-12T18:44:42.910557227Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:42.912372 containerd[1509]: time="2025-12-12T18:44:42.912308664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:44:42.912671 containerd[1509]: time="2025-12-12T18:44:42.912436491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:44:42.912937 
kubelet[2762]: E1212 18:44:42.912879 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:44:42.913058 kubelet[2762]: E1212 18:44:42.912997 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:44:42.913321 kubelet[2762]: E1212 18:44:42.913184 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-584dfb5d5c-klpbl_calico-system(07393087-27ea-4193-97b0-830a271e2225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:42.913954 kubelet[2762]: E1212 18:44:42.913425 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-584dfb5d5c-klpbl" podUID="07393087-27ea-4193-97b0-830a271e2225" Dec 12 18:44:43.430493 systemd-networkd[1422]: vxlan.calico: Link UP Dec 12 18:44:43.430507 systemd-networkd[1422]: vxlan.calico: Gained carrier Dec 12 18:44:43.449890 kubelet[2762]: I1212 18:44:43.449528 2762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08e815af-ab83-49a2-90b9-9c46cfab01ce" path="/var/lib/kubelet/pods/08e815af-ab83-49a2-90b9-9c46cfab01ce/volumes" Dec 12 18:44:43.628068 systemd-networkd[1422]: cali46423bda393: Gained IPv6LL Dec 12 18:44:43.759346 kubelet[2762]: E1212 18:44:43.759111 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-584dfb5d5c-klpbl" podUID="07393087-27ea-4193-97b0-830a271e2225" Dec 12 18:44:44.441421 containerd[1509]: time="2025-12-12T18:44:44.441363627Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5587cf7fbb-rjpxw,Uid:c6711621-2459-4031-98bb-2eedd5c212f5,Namespace:calico-system,Attempt:0,}" Dec 12 18:44:44.587940 systemd-networkd[1422]: vxlan.calico: Gained IPv6LL Dec 12 18:44:44.623524 systemd-networkd[1422]: calie995bdff64e: Link UP Dec 12 18:44:44.625778 systemd-networkd[1422]: calie995bdff64e: Gained carrier Dec 12 18:44:44.652042 containerd[1509]: 2025-12-12 18:44:44.506 [INFO][4120] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0 calico-kube-controllers-5587cf7fbb- calico-system c6711621-2459-4031-98bb-2eedd5c212f5 857 0 2025-12-12 18:44:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5587cf7fbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal calico-kube-controllers-5587cf7fbb-rjpxw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie995bdff64e [] [] }} ContainerID="fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" Namespace="calico-system" Pod="calico-kube-controllers-5587cf7fbb-rjpxw" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-" Dec 12 18:44:44.652042 containerd[1509]: 2025-12-12 18:44:44.506 [INFO][4120] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" Namespace="calico-system" Pod="calico-kube-controllers-5587cf7fbb-rjpxw" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0" Dec 
12 18:44:44.652042 containerd[1509]: 2025-12-12 18:44:44.566 [INFO][4133] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" HandleID="k8s-pod-network.fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0" Dec 12 18:44:44.652341 containerd[1509]: 2025-12-12 18:44:44.566 [INFO][4133] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" HandleID="k8s-pod-network.fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", "pod":"calico-kube-controllers-5587cf7fbb-rjpxw", "timestamp":"2025-12-12 18:44:44.566589227 +0000 UTC"}, Hostname:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:44:44.652341 containerd[1509]: 2025-12-12 18:44:44.567 [INFO][4133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:44:44.652341 containerd[1509]: 2025-12-12 18:44:44.567 [INFO][4133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:44:44.652341 containerd[1509]: 2025-12-12 18:44:44.567 [INFO][4133] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal' Dec 12 18:44:44.652341 containerd[1509]: 2025-12-12 18:44:44.579 [INFO][4133] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:44.652341 containerd[1509]: 2025-12-12 18:44:44.586 [INFO][4133] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:44.652341 containerd[1509]: 2025-12-12 18:44:44.593 [INFO][4133] ipam/ipam.go 511: Trying affinity for 192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:44.652341 containerd[1509]: 2025-12-12 18:44:44.595 [INFO][4133] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:44.654352 containerd[1509]: 2025-12-12 18:44:44.599 [INFO][4133] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:44.654352 containerd[1509]: 2025-12-12 18:44:44.600 [INFO][4133] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:44.654352 containerd[1509]: 2025-12-12 18:44:44.602 [INFO][4133] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7 Dec 12 18:44:44.654352 containerd[1509]: 2025-12-12 18:44:44.608 [INFO][4133] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.110.192/26 handle="k8s-pod-network.fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:44.654352 containerd[1509]: 2025-12-12 18:44:44.615 [INFO][4133] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.194/26] block=192.168.110.192/26 handle="k8s-pod-network.fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:44.654352 containerd[1509]: 2025-12-12 18:44:44.615 [INFO][4133] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.194/26] handle="k8s-pod-network.fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:44.654352 containerd[1509]: 2025-12-12 18:44:44.615 [INFO][4133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:44:44.654352 containerd[1509]: 2025-12-12 18:44:44.615 [INFO][4133] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.194/26] IPv6=[] ContainerID="fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" HandleID="k8s-pod-network.fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0" Dec 12 18:44:44.656181 containerd[1509]: 2025-12-12 18:44:44.618 [INFO][4120] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" Namespace="calico-system" Pod="calico-kube-controllers-5587cf7fbb-rjpxw" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0", GenerateName:"calico-kube-controllers-5587cf7fbb-", Namespace:"calico-system", SelfLink:"", UID:"c6711621-2459-4031-98bb-2eedd5c212f5", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5587cf7fbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-5587cf7fbb-rjpxw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie995bdff64e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:44.656315 containerd[1509]: 2025-12-12 18:44:44.618 [INFO][4120] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.194/32] ContainerID="fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" Namespace="calico-system" Pod="calico-kube-controllers-5587cf7fbb-rjpxw" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0" Dec 12 18:44:44.656315 containerd[1509]: 2025-12-12 
18:44:44.619 [INFO][4120] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie995bdff64e ContainerID="fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" Namespace="calico-system" Pod="calico-kube-controllers-5587cf7fbb-rjpxw" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0" Dec 12 18:44:44.656315 containerd[1509]: 2025-12-12 18:44:44.626 [INFO][4120] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" Namespace="calico-system" Pod="calico-kube-controllers-5587cf7fbb-rjpxw" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0" Dec 12 18:44:44.657130 containerd[1509]: 2025-12-12 18:44:44.628 [INFO][4120] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" Namespace="calico-system" Pod="calico-kube-controllers-5587cf7fbb-rjpxw" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0", GenerateName:"calico-kube-controllers-5587cf7fbb-", Namespace:"calico-system", SelfLink:"", UID:"c6711621-2459-4031-98bb-2eedd5c212f5", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"5587cf7fbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7", Pod:"calico-kube-controllers-5587cf7fbb-rjpxw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie995bdff64e", MAC:"9a:fe:29:11:f0:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:44.657130 containerd[1509]: 2025-12-12 18:44:44.646 [INFO][4120] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" Namespace="calico-system" Pod="calico-kube-controllers-5587cf7fbb-rjpxw" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--kube--controllers--5587cf7fbb--rjpxw-eth0" Dec 12 18:44:44.697494 containerd[1509]: time="2025-12-12T18:44:44.697290488Z" level=info msg="connecting to shim fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7" address="unix:///run/containerd/s/bc6281df0a8786132a20357380e4478953d9e1d4a53e5f784984afb047477e14" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:44:44.746593 systemd[1]: Started cri-containerd-fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7.scope - libcontainer container fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7. 
Dec 12 18:44:44.817290 containerd[1509]: time="2025-12-12T18:44:44.817241269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5587cf7fbb-rjpxw,Uid:c6711621-2459-4031-98bb-2eedd5c212f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc49f0c20c3f59e80a5b93ddfb840aaccd7803e2909e8988755adb6b58ed8cb7\"" Dec 12 18:44:44.820261 containerd[1509]: time="2025-12-12T18:44:44.820195698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:44:44.972391 containerd[1509]: time="2025-12-12T18:44:44.972252692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:44.973584 containerd[1509]: time="2025-12-12T18:44:44.973530450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:44:44.973960 containerd[1509]: time="2025-12-12T18:44:44.973664498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:44:44.974145 kubelet[2762]: E1212 18:44:44.973962 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:44:44.974145 kubelet[2762]: E1212 18:44:44.974113 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:44:44.974733 kubelet[2762]: E1212 18:44:44.974513 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5587cf7fbb-rjpxw_calico-system(c6711621-2459-4031-98bb-2eedd5c212f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:44.974733 kubelet[2762]: E1212 18:44:44.974689 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw" podUID="c6711621-2459-4031-98bb-2eedd5c212f5" Dec 12 18:44:45.441691 containerd[1509]: time="2025-12-12T18:44:45.441623439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q4rn9,Uid:8b56ee9b-eb0e-4a48-b289-fe72c1940fc8,Namespace:calico-system,Attempt:0,}" Dec 12 18:44:45.570222 systemd-networkd[1422]: cali66f5bd1a7ba: Link UP Dec 12 18:44:45.572516 systemd-networkd[1422]: cali66f5bd1a7ba: Gained carrier Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.496 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0 csi-node-driver- 
calico-system 8b56ee9b-eb0e-4a48-b289-fe72c1940fc8 742 0 2025-12-12 18:44:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal csi-node-driver-q4rn9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali66f5bd1a7ba [] [] }} ContainerID="97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" Namespace="calico-system" Pod="csi-node-driver-q4rn9" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.496 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" Namespace="calico-system" Pod="csi-node-driver-q4rn9" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.529 [INFO][4206] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" HandleID="k8s-pod-network.97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.529 [INFO][4206] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" HandleID="k8s-pod-network.97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" 
Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fdb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", "pod":"csi-node-driver-q4rn9", "timestamp":"2025-12-12 18:44:45.529582601 +0000 UTC"}, Hostname:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.529 [INFO][4206] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.529 [INFO][4206] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.530 [INFO][4206] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal' Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.537 [INFO][4206] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.541 [INFO][4206] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.546 [INFO][4206] ipam/ipam.go 511: Trying affinity for 192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.547 [INFO][4206] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.192/26 
host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.550 [INFO][4206] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.550 [INFO][4206] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.551 [INFO][4206] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.556 [INFO][4206] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.192/26 handle="k8s-pod-network.97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.562 [INFO][4206] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.195/26] block=192.168.110.192/26 handle="k8s-pod-network.97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.562 [INFO][4206] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.195/26] handle="k8s-pod-network.97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.562 [INFO][4206] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:44:45.590622 containerd[1509]: 2025-12-12 18:44:45.562 [INFO][4206] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.195/26] IPv6=[] ContainerID="97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" HandleID="k8s-pod-network.97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0" Dec 12 18:44:45.593776 containerd[1509]: 2025-12-12 18:44:45.566 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" Namespace="calico-system" Pod="csi-node-driver-q4rn9" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b56ee9b-eb0e-4a48-b289-fe72c1940fc8", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"", 
Pod:"csi-node-driver-q4rn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66f5bd1a7ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:45.593776 containerd[1509]: 2025-12-12 18:44:45.566 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.195/32] ContainerID="97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" Namespace="calico-system" Pod="csi-node-driver-q4rn9" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0" Dec 12 18:44:45.593776 containerd[1509]: 2025-12-12 18:44:45.566 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66f5bd1a7ba ContainerID="97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" Namespace="calico-system" Pod="csi-node-driver-q4rn9" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0" Dec 12 18:44:45.593776 containerd[1509]: 2025-12-12 18:44:45.573 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" Namespace="calico-system" Pod="csi-node-driver-q4rn9" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0" Dec 12 18:44:45.593776 containerd[1509]: 2025-12-12 18:44:45.574 [INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" Namespace="calico-system" Pod="csi-node-driver-q4rn9" 
WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b56ee9b-eb0e-4a48-b289-fe72c1940fc8", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b", Pod:"csi-node-driver-q4rn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66f5bd1a7ba", MAC:"1e:bf:7c:79:be:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:45.593776 containerd[1509]: 2025-12-12 18:44:45.588 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" Namespace="calico-system" Pod="csi-node-driver-q4rn9" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-csi--node--driver--q4rn9-eth0" Dec 12 18:44:45.628463 containerd[1509]: time="2025-12-12T18:44:45.628296146Z" level=info msg="connecting to shim 97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b" address="unix:///run/containerd/s/993623e005107e43ed8b98792fe7346c21a63206698913f32ef03a91f8610230" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:44:45.671620 systemd[1]: Started cri-containerd-97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b.scope - libcontainer container 97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b. Dec 12 18:44:45.716316 containerd[1509]: time="2025-12-12T18:44:45.716231297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q4rn9,Uid:8b56ee9b-eb0e-4a48-b289-fe72c1940fc8,Namespace:calico-system,Attempt:0,} returns sandbox id \"97d841b0d0e6019f3a95bcfc0ecd237f3aec258113e03143315730c25ab4375b\"" Dec 12 18:44:45.718907 containerd[1509]: time="2025-12-12T18:44:45.718871183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:44:45.766839 kubelet[2762]: E1212 18:44:45.765805 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw" podUID="c6711621-2459-4031-98bb-2eedd5c212f5" Dec 12 18:44:45.868088 systemd-networkd[1422]: 
calie995bdff64e: Gained IPv6LL Dec 12 18:44:45.873471 containerd[1509]: time="2025-12-12T18:44:45.873427464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:45.874808 containerd[1509]: time="2025-12-12T18:44:45.874750019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:44:45.874912 containerd[1509]: time="2025-12-12T18:44:45.874838318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:44:45.875044 kubelet[2762]: E1212 18:44:45.875000 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:44:45.875044 kubelet[2762]: E1212 18:44:45.875051 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:44:45.875284 kubelet[2762]: E1212 18:44:45.875134 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-q4rn9_calico-system(8b56ee9b-eb0e-4a48-b289-fe72c1940fc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 12 18:44:45.876735 containerd[1509]: time="2025-12-12T18:44:45.876695021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:44:46.029322 containerd[1509]: time="2025-12-12T18:44:46.029276071Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:46.030615 containerd[1509]: time="2025-12-12T18:44:46.030548780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:44:46.030743 containerd[1509]: time="2025-12-12T18:44:46.030652268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:44:46.030929 kubelet[2762]: E1212 18:44:46.030864 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:44:46.031534 kubelet[2762]: E1212 18:44:46.030927 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:44:46.031534 kubelet[2762]: E1212 18:44:46.031020 2762 kuberuntime_manager.go:1449] 
"Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-q4rn9_calico-system(8b56ee9b-eb0e-4a48-b289-fe72c1940fc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:46.031534 kubelet[2762]: E1212 18:44:46.031087 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8" Dec 12 18:44:46.442694 containerd[1509]: time="2025-12-12T18:44:46.442453555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vmf9l,Uid:3cd42596-095e-4d39-95c6-e096d2692550,Namespace:kube-system,Attempt:0,}" Dec 12 18:44:46.648582 systemd-networkd[1422]: cali96af04f42d4: Link UP Dec 12 18:44:46.650022 systemd-networkd[1422]: cali96af04f42d4: Gained carrier Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.499 [INFO][4270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0 coredns-66bc5c9577- kube-system 3cd42596-095e-4d39-95c6-e096d2692550 849 0 2025-12-12 18:44:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal coredns-66bc5c9577-vmf9l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96af04f42d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" Namespace="kube-system" Pod="coredns-66bc5c9577-vmf9l" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.499 [INFO][4270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" Namespace="kube-system" Pod="coredns-66bc5c9577-vmf9l" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.577 [INFO][4281] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" HandleID="k8s-pod-network.1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.578 [INFO][4281] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" 
HandleID="k8s-pod-network.1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ce150), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", "pod":"coredns-66bc5c9577-vmf9l", "timestamp":"2025-12-12 18:44:46.57714754 +0000 UTC"}, Hostname:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.578 [INFO][4281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.578 [INFO][4281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.578 [INFO][4281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal' Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.594 [INFO][4281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.601 [INFO][4281] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.607 [INFO][4281] ipam/ipam.go 511: Trying affinity for 192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.609 [INFO][4281] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.613 [INFO][4281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.615 [INFO][4281] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.617 [INFO][4281] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18 Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.625 [INFO][4281] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.110.192/26 handle="k8s-pod-network.1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.637 [INFO][4281] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.196/26] block=192.168.110.192/26 handle="k8s-pod-network.1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.637 [INFO][4281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.196/26] handle="k8s-pod-network.1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.637 [INFO][4281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:44:46.689593 containerd[1509]: 2025-12-12 18:44:46.637 [INFO][4281] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.196/26] IPv6=[] ContainerID="1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" HandleID="k8s-pod-network.1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0" Dec 12 18:44:46.690751 containerd[1509]: 2025-12-12 18:44:46.640 [INFO][4270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" Namespace="kube-system" Pod="coredns-66bc5c9577-vmf9l" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3cd42596-095e-4d39-95c6-e096d2692550", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-66bc5c9577-vmf9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96af04f42d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:46.690751 containerd[1509]: 2025-12-12 18:44:46.640 [INFO][4270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.196/32] ContainerID="1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" Namespace="kube-system" Pod="coredns-66bc5c9577-vmf9l" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0" Dec 12 18:44:46.690751 containerd[1509]: 2025-12-12 18:44:46.641 [INFO][4270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96af04f42d4 ContainerID="1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" Namespace="kube-system" Pod="coredns-66bc5c9577-vmf9l" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0" Dec 12 18:44:46.690751 containerd[1509]: 2025-12-12 18:44:46.650 [INFO][4270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" Namespace="kube-system" Pod="coredns-66bc5c9577-vmf9l" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0" Dec 12 18:44:46.691035 containerd[1509]: 2025-12-12 18:44:46.652 [INFO][4270] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" Namespace="kube-system" Pod="coredns-66bc5c9577-vmf9l" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0", GenerateName:"coredns-66bc5c9577-", 
Namespace:"kube-system", SelfLink:"", UID:"3cd42596-095e-4d39-95c6-e096d2692550", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18", Pod:"coredns-66bc5c9577-vmf9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96af04f42d4", MAC:"26:a9:65:e8:7f:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:46.691035 
containerd[1509]: 2025-12-12 18:44:46.685 [INFO][4270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" Namespace="kube-system" Pod="coredns-66bc5c9577-vmf9l" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vmf9l-eth0" Dec 12 18:44:46.734651 containerd[1509]: time="2025-12-12T18:44:46.734524008Z" level=info msg="connecting to shim 1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18" address="unix:///run/containerd/s/e95fc3a3341b417b13f9e33e4c4c4ba0c9c5bec6da5ab4d6eed9f76a3e5dbf89" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:44:46.779749 systemd[1]: Started cri-containerd-1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18.scope - libcontainer container 1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18. Dec 12 18:44:46.783315 kubelet[2762]: E1212 18:44:46.783239 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw" podUID="c6711621-2459-4031-98bb-2eedd5c212f5" Dec 12 18:44:46.785463 kubelet[2762]: E1212 18:44:46.785391 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8" Dec 12 18:44:46.874763 containerd[1509]: time="2025-12-12T18:44:46.874706886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vmf9l,Uid:3cd42596-095e-4d39-95c6-e096d2692550,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18\"" Dec 12 18:44:46.880966 containerd[1509]: time="2025-12-12T18:44:46.880910027Z" level=info msg="CreateContainer within sandbox \"1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:44:46.893108 containerd[1509]: time="2025-12-12T18:44:46.893059424Z" level=info msg="Container 849d0f106863e3ffb280b0cc42f961b93a2479a3475aae9c0b3fd125d7a5980c: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:44:46.903712 containerd[1509]: time="2025-12-12T18:44:46.903617501Z" level=info msg="CreateContainer within sandbox \"1b78cd191210d03903729a5fb250d7f197e48d96bc467144cec7fa26b9b5fe18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"849d0f106863e3ffb280b0cc42f961b93a2479a3475aae9c0b3fd125d7a5980c\"" Dec 12 18:44:46.904591 containerd[1509]: time="2025-12-12T18:44:46.904523363Z" level=info msg="StartContainer for \"849d0f106863e3ffb280b0cc42f961b93a2479a3475aae9c0b3fd125d7a5980c\"" Dec 12 
18:44:46.906083 containerd[1509]: time="2025-12-12T18:44:46.905999494Z" level=info msg="connecting to shim 849d0f106863e3ffb280b0cc42f961b93a2479a3475aae9c0b3fd125d7a5980c" address="unix:///run/containerd/s/e95fc3a3341b417b13f9e33e4c4c4ba0c9c5bec6da5ab4d6eed9f76a3e5dbf89" protocol=ttrpc version=3 Dec 12 18:44:46.930607 systemd[1]: Started cri-containerd-849d0f106863e3ffb280b0cc42f961b93a2479a3475aae9c0b3fd125d7a5980c.scope - libcontainer container 849d0f106863e3ffb280b0cc42f961b93a2479a3475aae9c0b3fd125d7a5980c. Dec 12 18:44:46.979715 containerd[1509]: time="2025-12-12T18:44:46.979671353Z" level=info msg="StartContainer for \"849d0f106863e3ffb280b0cc42f961b93a2479a3475aae9c0b3fd125d7a5980c\" returns successfully" Dec 12 18:44:47.020695 systemd-networkd[1422]: cali66f5bd1a7ba: Gained IPv6LL Dec 12 18:44:47.441902 containerd[1509]: time="2025-12-12T18:44:47.441592554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xcgsd,Uid:366b1ec7-f851-4339-83ca-caf896aa2049,Namespace:calico-system,Attempt:0,}" Dec 12 18:44:47.605140 systemd-networkd[1422]: cali0df94669301: Link UP Dec 12 18:44:47.607247 systemd-networkd[1422]: cali0df94669301: Gained carrier Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.498 [INFO][4379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0 goldmane-7c778bb748- calico-system 366b1ec7-f851-4339-83ca-caf896aa2049 856 0 2025-12-12 18:44:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal goldmane-7c778bb748-xcgsd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0df94669301 [] [] }} 
ContainerID="2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" Namespace="calico-system" Pod="goldmane-7c778bb748-xcgsd" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.499 [INFO][4379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" Namespace="calico-system" Pod="goldmane-7c778bb748-xcgsd" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.532 [INFO][4392] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" HandleID="k8s-pod-network.2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.533 [INFO][4392] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" HandleID="k8s-pod-network.2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", "pod":"goldmane-7c778bb748-xcgsd", "timestamp":"2025-12-12 18:44:47.532948939 +0000 UTC"}, Hostname:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.533 [INFO][4392] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.533 [INFO][4392] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.533 [INFO][4392] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal' Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.544 [INFO][4392] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.556 [INFO][4392] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.563 [INFO][4392] ipam/ipam.go 511: Trying affinity for 192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.567 [INFO][4392] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.573 [INFO][4392] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.574 [INFO][4392] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" 
host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.577 [INFO][4392] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.583 [INFO][4392] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.192/26 handle="k8s-pod-network.2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.594 [INFO][4392] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.197/26] block=192.168.110.192/26 handle="k8s-pod-network.2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.594 [INFO][4392] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.197/26] handle="k8s-pod-network.2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.594 [INFO][4392] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:44:47.640736 containerd[1509]: 2025-12-12 18:44:47.594 [INFO][4392] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.197/26] IPv6=[] ContainerID="2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" HandleID="k8s-pod-network.2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0" Dec 12 18:44:47.643372 containerd[1509]: 2025-12-12 18:44:47.598 [INFO][4379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" Namespace="calico-system" Pod="goldmane-7c778bb748-xcgsd" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"366b1ec7-f851-4339-83ca-caf896aa2049", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-7c778bb748-xcgsd", Endpoint:"eth0", 
ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.110.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0df94669301", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:47.643372 containerd[1509]: 2025-12-12 18:44:47.599 [INFO][4379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.197/32] ContainerID="2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" Namespace="calico-system" Pod="goldmane-7c778bb748-xcgsd" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0" Dec 12 18:44:47.643372 containerd[1509]: 2025-12-12 18:44:47.599 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0df94669301 ContainerID="2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" Namespace="calico-system" Pod="goldmane-7c778bb748-xcgsd" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0" Dec 12 18:44:47.643372 containerd[1509]: 2025-12-12 18:44:47.607 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" Namespace="calico-system" Pod="goldmane-7c778bb748-xcgsd" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0" Dec 12 18:44:47.643372 containerd[1509]: 2025-12-12 18:44:47.608 [INFO][4379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" Namespace="calico-system" Pod="goldmane-7c778bb748-xcgsd" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"366b1ec7-f851-4339-83ca-caf896aa2049", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef", Pod:"goldmane-7c778bb748-xcgsd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.110.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0df94669301", MAC:"52:61:bd:34:12:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:47.643372 containerd[1509]: 2025-12-12 18:44:47.636 [INFO][4379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" Namespace="calico-system" Pod="goldmane-7c778bb748-xcgsd" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--xcgsd-eth0" Dec 12 18:44:47.685623 
containerd[1509]: time="2025-12-12T18:44:47.685479334Z" level=info msg="connecting to shim 2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef" address="unix:///run/containerd/s/1cad5cc8ec796334e02ae93fa474c5a6eebe122db75bee99442c3d2a028eba31" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:44:47.726611 systemd-networkd[1422]: cali96af04f42d4: Gained IPv6LL Dec 12 18:44:47.750637 systemd[1]: Started cri-containerd-2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef.scope - libcontainer container 2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef. Dec 12 18:44:47.795609 kubelet[2762]: I1212 18:44:47.795537 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vmf9l" podStartSLOduration=43.795492919 podStartE2EDuration="43.795492919s" podCreationTimestamp="2025-12-12 18:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:44:47.794191762 +0000 UTC m=+48.617951516" watchObservedRunningTime="2025-12-12 18:44:47.795492919 +0000 UTC m=+48.619252672" Dec 12 18:44:47.875501 containerd[1509]: time="2025-12-12T18:44:47.875376595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xcgsd,Uid:366b1ec7-f851-4339-83ca-caf896aa2049,Namespace:calico-system,Attempt:0,} returns sandbox id \"2781c854918762dc1ab4693045fb271924c6c14f7aa6fe0eec21fe83226897ef\"" Dec 12 18:44:47.879444 containerd[1509]: time="2025-12-12T18:44:47.879365100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:44:48.034433 containerd[1509]: time="2025-12-12T18:44:48.034349911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:48.036426 containerd[1509]: time="2025-12-12T18:44:48.036081463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:44:48.036426 containerd[1509]: time="2025-12-12T18:44:48.036130269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:44:48.036753 kubelet[2762]: E1212 18:44:48.036645 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:44:48.037042 kubelet[2762]: E1212 18:44:48.036772 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:44:48.037150 kubelet[2762]: E1212 18:44:48.037074 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xcgsd_calico-system(366b1ec7-f851-4339-83ca-caf896aa2049): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:48.037313 kubelet[2762]: E1212 18:44:48.037243 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xcgsd" podUID="366b1ec7-f851-4339-83ca-caf896aa2049" Dec 12 18:44:48.441970 containerd[1509]: time="2025-12-12T18:44:48.441549889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2h7p4,Uid:1c7bcd16-486e-4e17-8ffa-b62714db730c,Namespace:kube-system,Attempt:0,}" Dec 12 18:44:48.443263 containerd[1509]: time="2025-12-12T18:44:48.443131866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc859bc98-p9lx6,Uid:7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:44:48.651374 systemd-networkd[1422]: cali1d02e47e280: Link UP Dec 12 18:44:48.651765 systemd-networkd[1422]: cali1d02e47e280: Gained carrier Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.539 [INFO][4459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0 calico-apiserver-7bc859bc98- calico-apiserver 7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a 852 0 2025-12-12 18:44:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bc859bc98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal calico-apiserver-7bc859bc98-p9lx6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1d02e47e280 [] [] }} ContainerID="df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-p9lx6" 
WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.540 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-p9lx6" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.594 [INFO][4485] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" HandleID="k8s-pod-network.df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.595 [INFO][4485] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" HandleID="k8s-pod-network.df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", "pod":"calico-apiserver-7bc859bc98-p9lx6", "timestamp":"2025-12-12 18:44:48.59458242 +0000 UTC"}, Hostname:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:44:48.673499 
containerd[1509]: 2025-12-12 18:44:48.595 [INFO][4485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.596 [INFO][4485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.596 [INFO][4485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal' Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.607 [INFO][4485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.613 [INFO][4485] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.618 [INFO][4485] ipam/ipam.go 511: Trying affinity for 192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.620 [INFO][4485] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.623 [INFO][4485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.623 [INFO][4485] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.625 [INFO][4485] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.629 [INFO][4485] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.192/26 handle="k8s-pod-network.df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.637 [INFO][4485] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.198/26] block=192.168.110.192/26 handle="k8s-pod-network.df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.637 [INFO][4485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.198/26] handle="k8s-pod-network.df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.637 [INFO][4485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:44:48.673499 containerd[1509]: 2025-12-12 18:44:48.637 [INFO][4485] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.198/26] IPv6=[] ContainerID="df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" HandleID="k8s-pod-network.df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0" Dec 12 18:44:48.674936 containerd[1509]: 2025-12-12 18:44:48.643 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-p9lx6" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0", GenerateName:"calico-apiserver-7bc859bc98-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc859bc98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7bc859bc98-p9lx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d02e47e280", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:48.674936 containerd[1509]: 2025-12-12 18:44:48.644 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.198/32] ContainerID="df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-p9lx6" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0" Dec 12 18:44:48.674936 containerd[1509]: 2025-12-12 18:44:48.644 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d02e47e280 ContainerID="df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-p9lx6" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0" Dec 12 18:44:48.674936 containerd[1509]: 2025-12-12 18:44:48.652 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-p9lx6" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0" Dec 12 18:44:48.674936 containerd[1509]: 2025-12-12 18:44:48.653 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-p9lx6" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0", GenerateName:"calico-apiserver-7bc859bc98-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc859bc98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be", Pod:"calico-apiserver-7bc859bc98-p9lx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d02e47e280", MAC:"2a:55:90:a9:88:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:48.674936 containerd[1509]: 
2025-12-12 18:44:48.670 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-p9lx6" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--p9lx6-eth0" Dec 12 18:44:48.732708 containerd[1509]: time="2025-12-12T18:44:48.731958036Z" level=info msg="connecting to shim df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be" address="unix:///run/containerd/s/baef8a38cba0192efb262cde62f4c36092b805f1a1bd37e260892d89d88abd6c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:44:48.793814 kubelet[2762]: E1212 18:44:48.793553 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xcgsd" podUID="366b1ec7-f851-4339-83ca-caf896aa2049" Dec 12 18:44:48.818874 systemd-networkd[1422]: calie5c0d42f287: Link UP Dec 12 18:44:48.822017 systemd-networkd[1422]: calie5c0d42f287: Gained carrier Dec 12 18:44:48.822593 systemd[1]: Started cri-containerd-df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be.scope - libcontainer container df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be. 
Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.544 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0 coredns-66bc5c9577- kube-system 1c7bcd16-486e-4e17-8ffa-b62714db730c 855 0 2025-12-12 18:44:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal coredns-66bc5c9577-2h7p4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie5c0d42f287 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" Namespace="kube-system" Pod="coredns-66bc5c9577-2h7p4" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.545 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" Namespace="kube-system" Pod="coredns-66bc5c9577-2h7p4" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.601 [INFO][4487] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" HandleID="k8s-pod-network.3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.602 
[INFO][4487] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" HandleID="k8s-pod-network.3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb5b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", "pod":"coredns-66bc5c9577-2h7p4", "timestamp":"2025-12-12 18:44:48.601916509 +0000 UTC"}, Hostname:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.604 [INFO][4487] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.637 [INFO][4487] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.638 [INFO][4487] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal' Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.714 [INFO][4487] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.728 [INFO][4487] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.740 [INFO][4487] ipam/ipam.go 511: Trying affinity for 192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.744 [INFO][4487] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.748 [INFO][4487] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.748 [INFO][4487] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.753 [INFO][4487] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049 Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.778 [INFO][4487] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.110.192/26 handle="k8s-pod-network.3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.800 [INFO][4487] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.199/26] block=192.168.110.192/26 handle="k8s-pod-network.3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.800 [INFO][4487] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.199/26] handle="k8s-pod-network.3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.801 [INFO][4487] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:44:48.858646 containerd[1509]: 2025-12-12 18:44:48.802 [INFO][4487] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.199/26] IPv6=[] ContainerID="3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" HandleID="k8s-pod-network.3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0" Dec 12 18:44:48.860927 containerd[1509]: 2025-12-12 18:44:48.812 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" Namespace="kube-system" Pod="coredns-66bc5c9577-2h7p4" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1c7bcd16-486e-4e17-8ffa-b62714db730c", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-66bc5c9577-2h7p4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5c0d42f287", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:48.860927 containerd[1509]: 2025-12-12 18:44:48.812 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.199/32] ContainerID="3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" Namespace="kube-system" Pod="coredns-66bc5c9577-2h7p4" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0" Dec 12 18:44:48.860927 containerd[1509]: 2025-12-12 18:44:48.812 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5c0d42f287 ContainerID="3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" Namespace="kube-system" Pod="coredns-66bc5c9577-2h7p4" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0" Dec 12 18:44:48.860927 containerd[1509]: 2025-12-12 18:44:48.818 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" Namespace="kube-system" Pod="coredns-66bc5c9577-2h7p4" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0" Dec 12 18:44:48.861220 containerd[1509]: 2025-12-12 18:44:48.819 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" Namespace="kube-system" Pod="coredns-66bc5c9577-2h7p4" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0", GenerateName:"coredns-66bc5c9577-", 
Namespace:"kube-system", SelfLink:"", UID:"1c7bcd16-486e-4e17-8ffa-b62714db730c", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049", Pod:"coredns-66bc5c9577-2h7p4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5c0d42f287", MAC:"52:94:d2:8f:38:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:48.861220 
containerd[1509]: 2025-12-12 18:44:48.846 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" Namespace="kube-system" Pod="coredns-66bc5c9577-2h7p4" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--2h7p4-eth0" Dec 12 18:44:48.919442 containerd[1509]: time="2025-12-12T18:44:48.919121820Z" level=info msg="connecting to shim 3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049" address="unix:///run/containerd/s/72c1ebf9faaa1dad96a99926b3931222cc9e467a763fc6d1b3a4bbfbc3701d50" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:44:48.972679 systemd[1]: Started cri-containerd-3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049.scope - libcontainer container 3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049. Dec 12 18:44:49.061593 containerd[1509]: time="2025-12-12T18:44:49.061378979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2h7p4,Uid:1c7bcd16-486e-4e17-8ffa-b62714db730c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049\"" Dec 12 18:44:49.075241 containerd[1509]: time="2025-12-12T18:44:49.075140942Z" level=info msg="CreateContainer within sandbox \"3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:44:49.091123 containerd[1509]: time="2025-12-12T18:44:49.090150843Z" level=info msg="Container 4ab953a25d265947d6cf393e0828e1e08f8ac0f431d164d099799dd4a13747f5: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:44:49.098149 containerd[1509]: time="2025-12-12T18:44:49.098110682Z" level=info msg="CreateContainer within sandbox \"3fca24f9763ad0f70928082de2f3ebef45e6e7fb4d372a5528ab34118f495049\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"4ab953a25d265947d6cf393e0828e1e08f8ac0f431d164d099799dd4a13747f5\"" Dec 12 18:44:49.098990 containerd[1509]: time="2025-12-12T18:44:49.098911651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc859bc98-p9lx6,Uid:7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"df526950e43729c4b82dca675a5d9539d9700e79d621e306a8fef17e37c092be\"" Dec 12 18:44:49.101541 containerd[1509]: time="2025-12-12T18:44:49.099817679Z" level=info msg="StartContainer for \"4ab953a25d265947d6cf393e0828e1e08f8ac0f431d164d099799dd4a13747f5\"" Dec 12 18:44:49.106132 containerd[1509]: time="2025-12-12T18:44:49.105952758Z" level=info msg="connecting to shim 4ab953a25d265947d6cf393e0828e1e08f8ac0f431d164d099799dd4a13747f5" address="unix:///run/containerd/s/72c1ebf9faaa1dad96a99926b3931222cc9e467a763fc6d1b3a4bbfbc3701d50" protocol=ttrpc version=3 Dec 12 18:44:49.107474 containerd[1509]: time="2025-12-12T18:44:49.107292111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:44:49.147610 systemd[1]: Started cri-containerd-4ab953a25d265947d6cf393e0828e1e08f8ac0f431d164d099799dd4a13747f5.scope - libcontainer container 4ab953a25d265947d6cf393e0828e1e08f8ac0f431d164d099799dd4a13747f5. 
Dec 12 18:44:49.190450 containerd[1509]: time="2025-12-12T18:44:49.190382031Z" level=info msg="StartContainer for \"4ab953a25d265947d6cf393e0828e1e08f8ac0f431d164d099799dd4a13747f5\" returns successfully" Dec 12 18:44:49.277512 containerd[1509]: time="2025-12-12T18:44:49.277438726Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:49.278933 containerd[1509]: time="2025-12-12T18:44:49.278798941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:44:49.278933 containerd[1509]: time="2025-12-12T18:44:49.278851149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:44:49.279223 kubelet[2762]: E1212 18:44:49.279164 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:44:49.279739 kubelet[2762]: E1212 18:44:49.279224 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:44:49.279739 kubelet[2762]: E1212 18:44:49.279332 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-7bc859bc98-p9lx6_calico-apiserver(7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:49.279739 kubelet[2762]: E1212 18:44:49.279472 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6" podUID="7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a" Dec 12 18:44:49.445201 containerd[1509]: time="2025-12-12T18:44:49.445053870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc859bc98-d98w5,Uid:845a4208-e218-43c1-932f-f50e27d32bf1,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:44:49.641115 systemd-networkd[1422]: calic85ad1719e4: Link UP Dec 12 18:44:49.642385 systemd-networkd[1422]: calic85ad1719e4: Gained carrier Dec 12 18:44:49.643678 systemd-networkd[1422]: cali0df94669301: Gained IPv6LL Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.520 [INFO][4646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0 calico-apiserver-7bc859bc98- calico-apiserver 845a4208-e218-43c1-932f-f50e27d32bf1 853 0 2025-12-12 18:44:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bc859bc98 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal calico-apiserver-7bc859bc98-d98w5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic85ad1719e4 [] [] }} ContainerID="b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-d98w5" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.520 [INFO][4646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-d98w5" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.584 [INFO][4657] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" HandleID="k8s-pod-network.b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.584 [INFO][4657] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" HandleID="k8s-pod-network.b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000385620), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", "pod":"calico-apiserver-7bc859bc98-d98w5", "timestamp":"2025-12-12 18:44:49.584678753 +0000 UTC"}, Hostname:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.584 [INFO][4657] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.585 [INFO][4657] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.585 [INFO][4657] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal' Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.597 [INFO][4657] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.603 [INFO][4657] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.612 [INFO][4657] ipam/ipam.go 511: Trying affinity for 192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.614 [INFO][4657] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.192/26 host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.617 [INFO][4657] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.192/26 
host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.617 [INFO][4657] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.110.192/26 handle="k8s-pod-network.b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.619 [INFO][4657] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6 Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.624 [INFO][4657] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.110.192/26 handle="k8s-pod-network.b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.631 [INFO][4657] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.110.200/26] block=192.168.110.192/26 handle="k8s-pod-network.b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.631 [INFO][4657] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.200/26] handle="k8s-pod-network.b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" host="ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal" Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.632 [INFO][4657] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:44:49.672646 containerd[1509]: 2025-12-12 18:44:49.632 [INFO][4657] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.110.200/26] IPv6=[] ContainerID="b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" HandleID="k8s-pod-network.b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" Workload="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0" Dec 12 18:44:49.675942 containerd[1509]: 2025-12-12 18:44:49.634 [INFO][4646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-d98w5" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0", GenerateName:"calico-apiserver-7bc859bc98-", Namespace:"calico-apiserver", SelfLink:"", UID:"845a4208-e218-43c1-932f-f50e27d32bf1", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc859bc98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7bc859bc98-d98w5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic85ad1719e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:49.675942 containerd[1509]: 2025-12-12 18:44:49.634 [INFO][4646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.200/32] ContainerID="b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-d98w5" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0" Dec 12 18:44:49.675942 containerd[1509]: 2025-12-12 18:44:49.634 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic85ad1719e4 ContainerID="b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-d98w5" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0" Dec 12 18:44:49.675942 containerd[1509]: 2025-12-12 18:44:49.647 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-d98w5" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0" Dec 12 18:44:49.675942 containerd[1509]: 2025-12-12 18:44:49.647 [INFO][4646] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-d98w5" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0", GenerateName:"calico-apiserver-7bc859bc98-", Namespace:"calico-apiserver", SelfLink:"", UID:"845a4208-e218-43c1-932f-f50e27d32bf1", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc859bc98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-15c7c9b17960997a143a.c.flatcar-212911.internal", ContainerID:"b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6", Pod:"calico-apiserver-7bc859bc98-d98w5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic85ad1719e4", MAC:"22:e0:8f:4d:ba:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:44:49.675942 containerd[1509]: 
2025-12-12 18:44:49.669 [INFO][4646] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc859bc98-d98w5" WorkloadEndpoint="ci--4459--2--2--15c7c9b17960997a143a.c.flatcar--212911.internal-k8s-calico--apiserver--7bc859bc98--d98w5-eth0" Dec 12 18:44:49.733672 containerd[1509]: time="2025-12-12T18:44:49.732547722Z" level=info msg="connecting to shim b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6" address="unix:///run/containerd/s/c79d9de4f41118452641c6e156f1f81af062f4684554466a1548a693165e1811" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:44:49.787593 systemd[1]: Started cri-containerd-b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6.scope - libcontainer container b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6. Dec 12 18:44:49.808688 kubelet[2762]: E1212 18:44:49.808630 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6" podUID="7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a" Dec 12 18:44:49.809085 kubelet[2762]: E1212 18:44:49.808630 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xcgsd" podUID="366b1ec7-f851-4339-83ca-caf896aa2049" Dec 12 18:44:49.851308 kubelet[2762]: I1212 18:44:49.851252 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2h7p4" podStartSLOduration=45.851231211 podStartE2EDuration="45.851231211s" podCreationTimestamp="2025-12-12 18:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:44:49.824898181 +0000 UTC m=+50.648657933" watchObservedRunningTime="2025-12-12 18:44:49.851231211 +0000 UTC m=+50.674990964" Dec 12 18:44:49.950534 containerd[1509]: time="2025-12-12T18:44:49.950485331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc859bc98-d98w5,Uid:845a4208-e218-43c1-932f-f50e27d32bf1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b45d51467b62a26c25e54602a605770cbdcf899fd0b9a6b46bc83aa745711ef6\"" Dec 12 18:44:49.955264 containerd[1509]: time="2025-12-12T18:44:49.955228357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:44:50.114255 containerd[1509]: time="2025-12-12T18:44:50.114204466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:50.115616 containerd[1509]: time="2025-12-12T18:44:50.115565124Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:44:50.115847 containerd[1509]: time="2025-12-12T18:44:50.115674072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 
12 18:44:50.116276 kubelet[2762]: E1212 18:44:50.115826 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:44:50.116276 kubelet[2762]: E1212 18:44:50.115880 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:44:50.116276 kubelet[2762]: E1212 18:44:50.116009 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7bc859bc98-d98w5_calico-apiserver(845a4208-e218-43c1-932f-f50e27d32bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:50.116276 kubelet[2762]: E1212 18:44:50.116057 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5" podUID="845a4208-e218-43c1-932f-f50e27d32bf1" Dec 12 18:44:50.603749 systemd-networkd[1422]: cali1d02e47e280: Gained IPv6LL Dec 12 18:44:50.731656 
systemd-networkd[1422]: calic85ad1719e4: Gained IPv6LL Dec 12 18:44:50.809250 kubelet[2762]: E1212 18:44:50.808783 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6" podUID="7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a" Dec 12 18:44:50.811519 kubelet[2762]: E1212 18:44:50.811142 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5" podUID="845a4208-e218-43c1-932f-f50e27d32bf1" Dec 12 18:44:50.859576 systemd-networkd[1422]: calie5c0d42f287: Gained IPv6LL Dec 12 18:44:51.811562 kubelet[2762]: E1212 18:44:51.811391 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5" podUID="845a4208-e218-43c1-932f-f50e27d32bf1" Dec 12 18:44:52.864040 ntpd[1644]: Listen normally on 6 vxlan.calico 192.168.110.192:123 Dec 12 18:44:52.864192 ntpd[1644]: Listen normally on 7 cali46423bda393 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 12 18:44:52.864873 ntpd[1644]: 12 Dec 18:44:52 ntpd[1644]: Listen normally on 6 vxlan.calico 192.168.110.192:123 Dec 12 18:44:52.864873 ntpd[1644]: 12 Dec 18:44:52 ntpd[1644]: Listen normally on 7 cali46423bda393 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 12 18:44:52.864873 ntpd[1644]: 12 Dec 18:44:52 ntpd[1644]: Listen normally on 8 vxlan.calico [fe80::64fc:2eff:feec:5bcd%5]:123 Dec 12 18:44:52.864873 ntpd[1644]: 12 Dec 18:44:52 ntpd[1644]: Listen normally on 9 calie995bdff64e [fe80::ecee:eeff:feee:eeee%8]:123 Dec 12 18:44:52.864873 ntpd[1644]: 12 Dec 18:44:52 ntpd[1644]: Listen normally on 10 cali66f5bd1a7ba [fe80::ecee:eeff:feee:eeee%9]:123 Dec 12 18:44:52.864873 ntpd[1644]: 12 Dec 18:44:52 ntpd[1644]: Listen normally on 11 cali96af04f42d4 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 12 18:44:52.864873 ntpd[1644]: 12 Dec 18:44:52 ntpd[1644]: Listen normally on 12 cali0df94669301 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 12 18:44:52.864873 ntpd[1644]: 12 Dec 18:44:52 ntpd[1644]: Listen normally on 13 cali1d02e47e280 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 12 18:44:52.864873 ntpd[1644]: 12 Dec 18:44:52 ntpd[1644]: Listen normally on 14 calie5c0d42f287 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 12 18:44:52.864873 ntpd[1644]: 12 Dec 18:44:52 ntpd[1644]: Listen normally on 15 calic85ad1719e4 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 12 18:44:52.864239 ntpd[1644]: Listen normally on 8 vxlan.calico [fe80::64fc:2eff:feec:5bcd%5]:123 Dec 12 18:44:52.864281 ntpd[1644]: Listen normally on 9 calie995bdff64e [fe80::ecee:eeff:feee:eeee%8]:123 Dec 12 18:44:52.864328 ntpd[1644]: Listen normally on 10 cali66f5bd1a7ba [fe80::ecee:eeff:feee:eeee%9]:123 Dec 12 18:44:52.864367 ntpd[1644]: Listen normally on 
11 cali96af04f42d4 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 12 18:44:52.864445 ntpd[1644]: Listen normally on 12 cali0df94669301 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 12 18:44:52.864491 ntpd[1644]: Listen normally on 13 cali1d02e47e280 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 12 18:44:52.864531 ntpd[1644]: Listen normally on 14 calie5c0d42f287 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 12 18:44:52.864573 ntpd[1644]: Listen normally on 15 calic85ad1719e4 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 12 18:44:56.439877 containerd[1509]: time="2025-12-12T18:44:56.439791462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:44:56.596467 containerd[1509]: time="2025-12-12T18:44:56.596364673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:56.597761 containerd[1509]: time="2025-12-12T18:44:56.597707473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:44:56.598027 containerd[1509]: time="2025-12-12T18:44:56.597814452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:44:56.598255 kubelet[2762]: E1212 18:44:56.598210 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:44:56.598775 kubelet[2762]: E1212 18:44:56.598266 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:44:56.598775 kubelet[2762]: E1212 18:44:56.598367 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-584dfb5d5c-klpbl_calico-system(07393087-27ea-4193-97b0-830a271e2225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:56.600660 containerd[1509]: time="2025-12-12T18:44:56.600622469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:44:56.663280 systemd[1]: Started sshd@7-10.128.0.44:22-78.128.112.74:55648.service - OpenSSH per-connection server daemon (78.128.112.74:55648). Dec 12 18:44:56.759483 containerd[1509]: time="2025-12-12T18:44:56.759433916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:56.760685 containerd[1509]: time="2025-12-12T18:44:56.760625959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:44:56.760802 containerd[1509]: time="2025-12-12T18:44:56.760721477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:44:56.761318 kubelet[2762]: E1212 18:44:56.760937 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:44:56.761318 kubelet[2762]: E1212 18:44:56.761003 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:44:56.761318 kubelet[2762]: E1212 18:44:56.761133 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-584dfb5d5c-klpbl_calico-system(07393087-27ea-4193-97b0-830a271e2225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:56.761547 kubelet[2762]: E1212 18:44:56.761196 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-584dfb5d5c-klpbl" 
podUID="07393087-27ea-4193-97b0-830a271e2225" Dec 12 18:44:57.288321 sshd[4736]: Invalid user admin from 78.128.112.74 port 55648 Dec 12 18:44:57.432202 sshd[4736]: Connection closed by invalid user admin 78.128.112.74 port 55648 [preauth] Dec 12 18:44:57.435858 systemd[1]: sshd@7-10.128.0.44:22-78.128.112.74:55648.service: Deactivated successfully. Dec 12 18:44:57.446812 containerd[1509]: time="2025-12-12T18:44:57.446737995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:44:57.654943 containerd[1509]: time="2025-12-12T18:44:57.654774853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:57.656314 containerd[1509]: time="2025-12-12T18:44:57.656186052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:44:57.656314 containerd[1509]: time="2025-12-12T18:44:57.656227218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:44:57.656585 kubelet[2762]: E1212 18:44:57.656533 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:44:57.657376 kubelet[2762]: E1212 18:44:57.656590 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:44:57.657376 
kubelet[2762]: E1212 18:44:57.656690 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-q4rn9_calico-system(8b56ee9b-eb0e-4a48-b289-fe72c1940fc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:57.659583 containerd[1509]: time="2025-12-12T18:44:57.659543424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:44:57.818621 containerd[1509]: time="2025-12-12T18:44:57.818558469Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:57.819882 containerd[1509]: time="2025-12-12T18:44:57.819818791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:44:57.820158 containerd[1509]: time="2025-12-12T18:44:57.819844311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:44:57.820258 kubelet[2762]: E1212 18:44:57.820088 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:44:57.820258 kubelet[2762]: E1212 18:44:57.820143 2762 kuberuntime_image.go:43] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:44:57.820258 kubelet[2762]: E1212 18:44:57.820241 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-q4rn9_calico-system(8b56ee9b-eb0e-4a48-b289-fe72c1940fc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:57.820613 kubelet[2762]: E1212 18:44:57.820306 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8" Dec 12 18:45:00.440008 containerd[1509]: time="2025-12-12T18:45:00.439852088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:45:00.596786 containerd[1509]: 
time="2025-12-12T18:45:00.596716831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:00.598186 containerd[1509]: time="2025-12-12T18:45:00.598127777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:45:00.598736 containerd[1509]: time="2025-12-12T18:45:00.598238880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:45:00.598850 kubelet[2762]: E1212 18:45:00.598453 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:45:00.598850 kubelet[2762]: E1212 18:45:00.598507 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:45:00.598850 kubelet[2762]: E1212 18:45:00.598608 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xcgsd_calico-system(366b1ec7-f851-4339-83ca-caf896aa2049): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
logger="UnhandledError" Dec 12 18:45:00.598850 kubelet[2762]: E1212 18:45:00.598655 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xcgsd" podUID="366b1ec7-f851-4339-83ca-caf896aa2049" Dec 12 18:45:01.441206 containerd[1509]: time="2025-12-12T18:45:01.441156574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:45:01.605181 containerd[1509]: time="2025-12-12T18:45:01.605100474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:01.606479 containerd[1509]: time="2025-12-12T18:45:01.606427096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:45:01.606479 containerd[1509]: time="2025-12-12T18:45:01.606431180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:45:01.606889 kubelet[2762]: E1212 18:45:01.606816 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:45:01.606889 kubelet[2762]: E1212 18:45:01.606871 2762 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:45:01.607592 kubelet[2762]: E1212 18:45:01.606971 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5587cf7fbb-rjpxw_calico-system(c6711621-2459-4031-98bb-2eedd5c212f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:01.607592 kubelet[2762]: E1212 18:45:01.607021 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw" podUID="c6711621-2459-4031-98bb-2eedd5c212f5" Dec 12 18:45:03.441038 containerd[1509]: time="2025-12-12T18:45:03.440984568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:45:03.603594 containerd[1509]: time="2025-12-12T18:45:03.603542298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:03.604964 containerd[1509]: time="2025-12-12T18:45:03.604837765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:45:03.604964 containerd[1509]: time="2025-12-12T18:45:03.604888292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:45:03.605189 kubelet[2762]: E1212 18:45:03.605145 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:03.605701 kubelet[2762]: E1212 18:45:03.605196 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:03.605701 kubelet[2762]: E1212 18:45:03.605284 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7bc859bc98-p9lx6_calico-apiserver(7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:03.605701 kubelet[2762]: E1212 18:45:03.605332 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6" podUID="7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a" Dec 12 18:45:04.440440 containerd[1509]: time="2025-12-12T18:45:04.440122381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:45:04.599848 containerd[1509]: time="2025-12-12T18:45:04.599758729Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:04.601378 containerd[1509]: time="2025-12-12T18:45:04.601209308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:45:04.601607 containerd[1509]: time="2025-12-12T18:45:04.601317616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:45:04.601903 kubelet[2762]: E1212 18:45:04.601815 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:04.601903 kubelet[2762]: E1212 18:45:04.601872 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:04.602520 kubelet[2762]: E1212 18:45:04.602421 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7bc859bc98-d98w5_calico-apiserver(845a4208-e218-43c1-932f-f50e27d32bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:04.603100 kubelet[2762]: E1212 18:45:04.603024 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5" podUID="845a4208-e218-43c1-932f-f50e27d32bf1" Dec 12 18:45:07.309618 systemd[1]: Started sshd@8-10.128.0.44:22-147.75.109.163:39566.service - OpenSSH per-connection server daemon (147.75.109.163:39566). Dec 12 18:45:07.619883 sshd[4755]: Accepted publickey for core from 147.75.109.163 port 39566 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:45:07.621734 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:07.628745 systemd-logind[1490]: New session 8 of user core. Dec 12 18:45:07.634613 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 12 18:45:07.924373 sshd[4761]: Connection closed by 147.75.109.163 port 39566 Dec 12 18:45:07.925352 sshd-session[4755]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:07.932043 systemd[1]: sshd@8-10.128.0.44:22-147.75.109.163:39566.service: Deactivated successfully. Dec 12 18:45:07.935156 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:45:07.937914 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:45:07.940028 systemd-logind[1490]: Removed session 8. Dec 12 18:45:09.443754 kubelet[2762]: E1212 18:45:09.443677 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-584dfb5d5c-klpbl" podUID="07393087-27ea-4193-97b0-830a271e2225" Dec 12 18:45:11.443226 kubelet[2762]: E1212 18:45:11.442858 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8" Dec 12 18:45:12.978744 systemd[1]: Started sshd@9-10.128.0.44:22-147.75.109.163:49870.service - OpenSSH per-connection server daemon (147.75.109.163:49870). Dec 12 18:45:13.286836 sshd[4776]: Accepted publickey for core from 147.75.109.163 port 49870 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:45:13.288497 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:13.294482 systemd-logind[1490]: New session 9 of user core. Dec 12 18:45:13.301615 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 12 18:45:13.441416 kubelet[2762]: E1212 18:45:13.441329 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw" podUID="c6711621-2459-4031-98bb-2eedd5c212f5" Dec 12 18:45:13.446123 kubelet[2762]: E1212 18:45:13.446006 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xcgsd" podUID="366b1ec7-f851-4339-83ca-caf896aa2049" Dec 12 18:45:13.602828 sshd[4779]: Connection closed by 147.75.109.163 port 49870 Dec 12 18:45:13.604043 sshd-session[4776]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:13.610638 systemd[1]: sshd@9-10.128.0.44:22-147.75.109.163:49870.service: Deactivated successfully. Dec 12 18:45:13.613907 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:45:13.615246 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:45:13.617924 systemd-logind[1490]: Removed session 9. 
Dec 12 18:45:14.439789 kubelet[2762]: E1212 18:45:14.439720 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6" podUID="7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a" Dec 12 18:45:17.441680 kubelet[2762]: E1212 18:45:17.441571 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5" podUID="845a4208-e218-43c1-932f-f50e27d32bf1" Dec 12 18:45:18.661094 systemd[1]: Started sshd@10-10.128.0.44:22-147.75.109.163:49878.service - OpenSSH per-connection server daemon (147.75.109.163:49878). Dec 12 18:45:18.964739 sshd[4815]: Accepted publickey for core from 147.75.109.163 port 49878 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:45:18.966632 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:18.974367 systemd-logind[1490]: New session 10 of user core. Dec 12 18:45:18.979639 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 12 18:45:19.262074 sshd[4818]: Connection closed by 147.75.109.163 port 49878 Dec 12 18:45:19.262990 sshd-session[4815]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:19.268957 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:45:19.269142 systemd[1]: sshd@10-10.128.0.44:22-147.75.109.163:49878.service: Deactivated successfully. Dec 12 18:45:19.272793 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:45:19.276689 systemd-logind[1490]: Removed session 10. Dec 12 18:45:19.315881 systemd[1]: Started sshd@11-10.128.0.44:22-147.75.109.163:49884.service - OpenSSH per-connection server daemon (147.75.109.163:49884). Dec 12 18:45:19.626972 sshd[4831]: Accepted publickey for core from 147.75.109.163 port 49884 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:45:19.628905 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:19.637126 systemd-logind[1490]: New session 11 of user core. Dec 12 18:45:19.646614 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:45:20.003004 sshd[4834]: Connection closed by 147.75.109.163 port 49884 Dec 12 18:45:20.003883 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:20.011740 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:45:20.012689 systemd[1]: sshd@11-10.128.0.44:22-147.75.109.163:49884.service: Deactivated successfully. Dec 12 18:45:20.016352 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:45:20.021333 systemd-logind[1490]: Removed session 11. Dec 12 18:45:20.062573 systemd[1]: Started sshd@12-10.128.0.44:22-147.75.109.163:49894.service - OpenSSH per-connection server daemon (147.75.109.163:49894). 
Dec 12 18:45:20.377122 sshd[4845]: Accepted publickey for core from 147.75.109.163 port 49894 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:45:20.377915 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:20.386506 systemd-logind[1490]: New session 12 of user core. Dec 12 18:45:20.397728 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 18:45:20.682839 sshd[4848]: Connection closed by 147.75.109.163 port 49894 Dec 12 18:45:20.683679 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:20.689890 systemd[1]: sshd@12-10.128.0.44:22-147.75.109.163:49894.service: Deactivated successfully. Dec 12 18:45:20.692950 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:45:20.695539 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:45:20.698646 systemd-logind[1490]: Removed session 12. Dec 12 18:45:24.444076 containerd[1509]: time="2025-12-12T18:45:24.443686099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:45:24.631957 containerd[1509]: time="2025-12-12T18:45:24.631900873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:24.633951 containerd[1509]: time="2025-12-12T18:45:24.633888370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:45:24.634083 containerd[1509]: time="2025-12-12T18:45:24.634006915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:45:24.634236 kubelet[2762]: E1212 18:45:24.634186 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:45:24.634727 kubelet[2762]: E1212 18:45:24.634248 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:45:24.634727 kubelet[2762]: E1212 18:45:24.634343 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-584dfb5d5c-klpbl_calico-system(07393087-27ea-4193-97b0-830a271e2225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:24.642677 containerd[1509]: time="2025-12-12T18:45:24.642641712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:45:24.806422 containerd[1509]: time="2025-12-12T18:45:24.806354057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:24.807654 containerd[1509]: time="2025-12-12T18:45:24.807594695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:45:24.807923 containerd[1509]: time="2025-12-12T18:45:24.807708969Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:45:24.808012 kubelet[2762]: E1212 18:45:24.807912 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:45:24.808012 kubelet[2762]: E1212 18:45:24.807977 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:45:24.808619 kubelet[2762]: E1212 18:45:24.808121 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-584dfb5d5c-klpbl_calico-system(07393087-27ea-4193-97b0-830a271e2225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:24.808740 kubelet[2762]: E1212 18:45:24.808666 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-584dfb5d5c-klpbl" podUID="07393087-27ea-4193-97b0-830a271e2225" Dec 12 18:45:25.444935 containerd[1509]: time="2025-12-12T18:45:25.443658877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:45:25.606572 containerd[1509]: time="2025-12-12T18:45:25.606518975Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:25.609427 containerd[1509]: time="2025-12-12T18:45:25.607990469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:45:25.609427 containerd[1509]: time="2025-12-12T18:45:25.608107408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:45:25.610085 kubelet[2762]: E1212 18:45:25.609948 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:25.610085 kubelet[2762]: E1212 18:45:25.610032 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:25.611082 kubelet[2762]: E1212 18:45:25.610667 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7bc859bc98-p9lx6_calico-apiserver(7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:25.611555 kubelet[2762]: E1212 18:45:25.611414 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6" podUID="7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a" Dec 12 18:45:25.612150 containerd[1509]: time="2025-12-12T18:45:25.611868500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:45:25.745839 systemd[1]: Started sshd@13-10.128.0.44:22-147.75.109.163:56104.service - OpenSSH per-connection server daemon (147.75.109.163:56104). 
Dec 12 18:45:25.774958 containerd[1509]: time="2025-12-12T18:45:25.774912441Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:25.776342 containerd[1509]: time="2025-12-12T18:45:25.776290898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:45:25.777480 containerd[1509]: time="2025-12-12T18:45:25.777441601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:45:25.777824 kubelet[2762]: E1212 18:45:25.777772 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:45:25.778247 kubelet[2762]: E1212 18:45:25.777851 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:45:25.778247 kubelet[2762]: E1212 18:45:25.777970 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-q4rn9_calico-system(8b56ee9b-eb0e-4a48-b289-fe72c1940fc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:25.783876 
containerd[1509]: time="2025-12-12T18:45:25.782521758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:45:25.954538 containerd[1509]: time="2025-12-12T18:45:25.954480103Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:25.955906 containerd[1509]: time="2025-12-12T18:45:25.955843867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:45:25.956068 containerd[1509]: time="2025-12-12T18:45:25.955941287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:45:25.956200 kubelet[2762]: E1212 18:45:25.956150 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:45:25.956357 kubelet[2762]: E1212 18:45:25.956205 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:45:25.956980 kubelet[2762]: E1212 18:45:25.956345 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
csi-node-driver-registrar start failed in pod csi-node-driver-q4rn9_calico-system(8b56ee9b-eb0e-4a48-b289-fe72c1940fc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:25.957145 kubelet[2762]: E1212 18:45:25.957013 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8" Dec 12 18:45:26.084632 sshd[4870]: Accepted publickey for core from 147.75.109.163 port 56104 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:45:26.085805 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:26.093608 systemd-logind[1490]: New session 13 of user core. Dec 12 18:45:26.099638 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 12 18:45:26.378522 sshd[4874]: Connection closed by 147.75.109.163 port 56104 Dec 12 18:45:26.379726 sshd-session[4870]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:26.384601 systemd[1]: sshd@13-10.128.0.44:22-147.75.109.163:56104.service: Deactivated successfully. Dec 12 18:45:26.388089 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:45:26.390374 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:45:26.392672 systemd-logind[1490]: Removed session 13. Dec 12 18:45:28.441639 containerd[1509]: time="2025-12-12T18:45:28.441587729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:45:28.636053 containerd[1509]: time="2025-12-12T18:45:28.635972549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:28.637383 containerd[1509]: time="2025-12-12T18:45:28.637250956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:45:28.637383 containerd[1509]: time="2025-12-12T18:45:28.637313863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:45:28.637648 kubelet[2762]: E1212 18:45:28.637596 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:45:28.638348 kubelet[2762]: E1212 18:45:28.637659 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:45:28.638348 kubelet[2762]: E1212 18:45:28.637929 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xcgsd_calico-system(366b1ec7-f851-4339-83ca-caf896aa2049): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:28.638348 kubelet[2762]: E1212 18:45:28.637982 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xcgsd" podUID="366b1ec7-f851-4339-83ca-caf896aa2049" Dec 12 18:45:28.639512 containerd[1509]: time="2025-12-12T18:45:28.639200317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:45:28.798019 containerd[1509]: time="2025-12-12T18:45:28.797960085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:28.799498 containerd[1509]: time="2025-12-12T18:45:28.799339671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" Dec 12 18:45:28.799498 containerd[1509]: time="2025-12-12T18:45:28.799392769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:45:28.799928 kubelet[2762]: E1212 18:45:28.799857 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:45:28.799928 kubelet[2762]: E1212 18:45:28.799915 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:45:28.800109 kubelet[2762]: E1212 18:45:28.800022 2762 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5587cf7fbb-rjpxw_calico-system(c6711621-2459-4031-98bb-2eedd5c212f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:28.800109 kubelet[2762]: E1212 18:45:28.800070 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw" podUID="c6711621-2459-4031-98bb-2eedd5c212f5" Dec 12 18:45:30.440446 containerd[1509]: time="2025-12-12T18:45:30.439974051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:45:30.608748 containerd[1509]: time="2025-12-12T18:45:30.608691659Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:30.610143 containerd[1509]: time="2025-12-12T18:45:30.610081686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:45:30.610513 containerd[1509]: time="2025-12-12T18:45:30.610108304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:45:30.610583 kubelet[2762]: E1212 18:45:30.610343 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:30.610583 kubelet[2762]: E1212 18:45:30.610420 2762 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:30.610583 kubelet[2762]: E1212 18:45:30.610523 2762 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7bc859bc98-d98w5_calico-apiserver(845a4208-e218-43c1-932f-f50e27d32bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:30.611640 kubelet[2762]: E1212 18:45:30.610574 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5" podUID="845a4208-e218-43c1-932f-f50e27d32bf1" Dec 12 18:45:31.438858 systemd[1]: Started sshd@14-10.128.0.44:22-147.75.109.163:56106.service - OpenSSH per-connection server daemon (147.75.109.163:56106). Dec 12 18:45:31.751651 sshd[4888]: Accepted publickey for core from 147.75.109.163 port 56106 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:45:31.753539 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:31.760472 systemd-logind[1490]: New session 14 of user core. Dec 12 18:45:31.767609 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 18:45:32.051168 sshd[4891]: Connection closed by 147.75.109.163 port 56106 Dec 12 18:45:32.052796 sshd-session[4888]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:32.058604 systemd[1]: sshd@14-10.128.0.44:22-147.75.109.163:56106.service: Deactivated successfully. Dec 12 18:45:32.062558 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 12 18:45:32.064280 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:45:32.066339 systemd-logind[1490]: Removed session 14. Dec 12 18:45:36.441167 kubelet[2762]: E1212 18:45:36.440706 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6" podUID="7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a" Dec 12 18:45:37.113907 systemd[1]: Started sshd@15-10.128.0.44:22-147.75.109.163:54036.service - OpenSSH per-connection server daemon (147.75.109.163:54036). Dec 12 18:45:37.456230 sshd[4906]: Accepted publickey for core from 147.75.109.163 port 54036 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:45:37.459587 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:37.472045 systemd-logind[1490]: New session 15 of user core. Dec 12 18:45:37.480743 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 18:45:37.834911 sshd[4909]: Connection closed by 147.75.109.163 port 54036 Dec 12 18:45:37.836062 sshd-session[4906]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:37.843563 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:45:37.844659 systemd[1]: sshd@15-10.128.0.44:22-147.75.109.163:54036.service: Deactivated successfully. Dec 12 18:45:37.849047 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:45:37.854832 systemd-logind[1490]: Removed session 15. 
Dec 12 18:45:40.444099 kubelet[2762]: E1212 18:45:40.444025 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-584dfb5d5c-klpbl" podUID="07393087-27ea-4193-97b0-830a271e2225" Dec 12 18:45:41.442153 kubelet[2762]: E1212 18:45:41.442071 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xcgsd" podUID="366b1ec7-f851-4339-83ca-caf896aa2049" Dec 12 18:45:41.447909 kubelet[2762]: E1212 18:45:41.447838 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8" Dec 12 18:45:42.441805 kubelet[2762]: E1212 18:45:42.440984 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw" podUID="c6711621-2459-4031-98bb-2eedd5c212f5" Dec 12 18:45:42.441805 kubelet[2762]: E1212 18:45:42.441572 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5" podUID="845a4208-e218-43c1-932f-f50e27d32bf1" Dec 12 18:45:42.893294 systemd[1]: Started sshd@16-10.128.0.44:22-147.75.109.163:58416.service - OpenSSH per-connection server daemon (147.75.109.163:58416). Dec 12 18:45:43.213902 sshd[4921]: Accepted publickey for core from 147.75.109.163 port 58416 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:45:43.217339 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:43.228073 systemd-logind[1490]: New session 16 of user core. Dec 12 18:45:43.236806 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 18:45:43.558390 sshd[4924]: Connection closed by 147.75.109.163 port 58416 Dec 12 18:45:43.561493 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:43.571730 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:45:43.572898 systemd[1]: sshd@16-10.128.0.44:22-147.75.109.163:58416.service: Deactivated successfully. Dec 12 18:45:43.578118 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:45:43.583104 systemd-logind[1490]: Removed session 16. Dec 12 18:45:43.618747 systemd[1]: Started sshd@17-10.128.0.44:22-147.75.109.163:58418.service - OpenSSH per-connection server daemon (147.75.109.163:58418). Dec 12 18:45:43.943954 sshd[4936]: Accepted publickey for core from 147.75.109.163 port 58418 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo Dec 12 18:45:43.946273 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:43.960345 systemd-logind[1490]: New session 17 of user core. Dec 12 18:45:43.963647 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 18:45:44.447425 sshd[4961]: Connection closed by 147.75.109.163 port 58418
Dec 12 18:45:44.449676 sshd-session[4936]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:44.456878 systemd[1]: sshd@17-10.128.0.44:22-147.75.109.163:58418.service: Deactivated successfully.
Dec 12 18:45:44.462203 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 18:45:44.464478 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit.
Dec 12 18:45:44.470389 systemd-logind[1490]: Removed session 17.
Dec 12 18:45:44.505859 systemd[1]: Started sshd@18-10.128.0.44:22-147.75.109.163:58426.service - OpenSSH per-connection server daemon (147.75.109.163:58426).
Dec 12 18:45:44.829122 sshd[4971]: Accepted publickey for core from 147.75.109.163 port 58426 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo
Dec 12 18:45:44.831921 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:44.844042 systemd-logind[1490]: New session 18 of user core.
Dec 12 18:45:44.849662 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 18:45:46.108163 sshd[4974]: Connection closed by 147.75.109.163 port 58426
Dec 12 18:45:46.108698 sshd-session[4971]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:46.125020 systemd[1]: sshd@18-10.128.0.44:22-147.75.109.163:58426.service: Deactivated successfully.
Dec 12 18:45:46.131618 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 18:45:46.134484 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit.
Dec 12 18:45:46.140461 systemd-logind[1490]: Removed session 18.
Dec 12 18:45:46.169731 systemd[1]: Started sshd@19-10.128.0.44:22-147.75.109.163:58428.service - OpenSSH per-connection server daemon (147.75.109.163:58428).
Dec 12 18:45:46.503929 sshd[4989]: Accepted publickey for core from 147.75.109.163 port 58428 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo
Dec 12 18:45:46.507118 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:46.516326 systemd-logind[1490]: New session 19 of user core.
Dec 12 18:45:46.524635 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 18:45:47.108498 sshd[4993]: Connection closed by 147.75.109.163 port 58428
Dec 12 18:45:47.111570 sshd-session[4989]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:47.123179 systemd[1]: sshd@19-10.128.0.44:22-147.75.109.163:58428.service: Deactivated successfully.
Dec 12 18:45:47.130196 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 18:45:47.137208 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit.
Dec 12 18:45:47.140597 systemd-logind[1490]: Removed session 19.
Dec 12 18:45:47.168737 systemd[1]: Started sshd@20-10.128.0.44:22-147.75.109.163:58442.service - OpenSSH per-connection server daemon (147.75.109.163:58442).
Dec 12 18:45:47.489668 sshd[5003]: Accepted publickey for core from 147.75.109.163 port 58442 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo
Dec 12 18:45:47.492755 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:47.503219 systemd-logind[1490]: New session 20 of user core.
Dec 12 18:45:47.513322 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 18:45:47.913770 sshd[5008]: Connection closed by 147.75.109.163 port 58442
Dec 12 18:45:47.914371 sshd-session[5003]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:47.928073 systemd[1]: sshd@20-10.128.0.44:22-147.75.109.163:58442.service: Deactivated successfully.
Dec 12 18:45:47.928574 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit.
Dec 12 18:45:47.933783 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 18:45:47.938882 systemd-logind[1490]: Removed session 20.
Dec 12 18:45:48.441248 kubelet[2762]: E1212 18:45:48.441159 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6" podUID="7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a"
Dec 12 18:45:51.444285 kubelet[2762]: E1212 18:45:51.444212 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-584dfb5d5c-klpbl" podUID="07393087-27ea-4193-97b0-830a271e2225"
Dec 12 18:45:52.440871 kubelet[2762]: E1212 18:45:52.440380 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xcgsd" podUID="366b1ec7-f851-4339-83ca-caf896aa2049"
Dec 12 18:45:52.970800 systemd[1]: Started sshd@21-10.128.0.44:22-147.75.109.163:41544.service - OpenSSH per-connection server daemon (147.75.109.163:41544).
Dec 12 18:45:53.305074 sshd[5023]: Accepted publickey for core from 147.75.109.163 port 41544 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo
Dec 12 18:45:53.308429 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:53.317058 systemd-logind[1490]: New session 21 of user core.
Dec 12 18:45:53.326119 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 12 18:45:53.444069 kubelet[2762]: E1212 18:45:53.443860 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw" podUID="c6711621-2459-4031-98bb-2eedd5c212f5"
Dec 12 18:45:53.447863 kubelet[2762]: E1212 18:45:53.447746 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8"
Dec 12 18:45:53.701357 sshd[5026]: Connection closed by 147.75.109.163 port 41544
Dec 12 18:45:53.703729 sshd-session[5023]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:53.713706 systemd[1]: sshd@21-10.128.0.44:22-147.75.109.163:41544.service: Deactivated successfully.
Dec 12 18:45:53.722653 systemd[1]: session-21.scope: Deactivated successfully.
Dec 12 18:45:53.725657 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit.
Dec 12 18:45:53.730530 systemd-logind[1490]: Removed session 21.
Dec 12 18:45:56.439478 kubelet[2762]: E1212 18:45:56.439343 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5" podUID="845a4208-e218-43c1-932f-f50e27d32bf1"
Dec 12 18:45:58.763541 systemd[1]: Started sshd@22-10.128.0.44:22-147.75.109.163:41552.service - OpenSSH per-connection server daemon (147.75.109.163:41552).
Dec 12 18:45:59.109636 sshd[5038]: Accepted publickey for core from 147.75.109.163 port 41552 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo
Dec 12 18:45:59.113063 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:59.123894 systemd-logind[1490]: New session 22 of user core.
Dec 12 18:45:59.130830 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 12 18:45:59.480525 sshd[5041]: Connection closed by 147.75.109.163 port 41552
Dec 12 18:45:59.481521 sshd-session[5038]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:59.493658 systemd[1]: sshd@22-10.128.0.44:22-147.75.109.163:41552.service: Deactivated successfully.
Dec 12 18:45:59.503659 systemd[1]: session-22.scope: Deactivated successfully.
Dec 12 18:45:59.507868 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit.
Dec 12 18:45:59.511023 systemd-logind[1490]: Removed session 22.
Dec 12 18:46:02.440383 kubelet[2762]: E1212 18:46:02.440297 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-p9lx6" podUID="7a3fffb7-4ae7-40ae-ba0a-4502dfe78f4a"
Dec 12 18:46:03.442149 kubelet[2762]: E1212 18:46:03.441991 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-584dfb5d5c-klpbl" podUID="07393087-27ea-4193-97b0-830a271e2225"
Dec 12 18:46:04.441704 kubelet[2762]: E1212 18:46:04.441637 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5587cf7fbb-rjpxw" podUID="c6711621-2459-4031-98bb-2eedd5c212f5"
Dec 12 18:46:04.444300 kubelet[2762]: E1212 18:46:04.442353 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q4rn9" podUID="8b56ee9b-eb0e-4a48-b289-fe72c1940fc8"
Dec 12 18:46:04.538188 systemd[1]: Started sshd@23-10.128.0.44:22-147.75.109.163:45456.service - OpenSSH per-connection server daemon (147.75.109.163:45456).
Dec 12 18:46:04.867432 sshd[5061]: Accepted publickey for core from 147.75.109.163 port 45456 ssh2: RSA SHA256:B6VDh7BGfKzZPlvJUAsb3PG3jcaWKb5c7hp5dQ2EGeo
Dec 12 18:46:04.869206 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:04.878347 systemd-logind[1490]: New session 23 of user core.
Dec 12 18:46:04.887821 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 12 18:46:05.210607 sshd[5064]: Connection closed by 147.75.109.163 port 45456
Dec 12 18:46:05.213002 sshd-session[5061]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:05.222298 systemd[1]: sshd@23-10.128.0.44:22-147.75.109.163:45456.service: Deactivated successfully.
Dec 12 18:46:05.227023 systemd[1]: session-23.scope: Deactivated successfully.
Dec 12 18:46:05.229701 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit.
Dec 12 18:46:05.233066 systemd-logind[1490]: Removed session 23.
Dec 12 18:46:07.440148 kubelet[2762]: E1212 18:46:07.440035 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xcgsd" podUID="366b1ec7-f851-4339-83ca-caf896aa2049"
Dec 12 18:46:07.442985 kubelet[2762]: E1212 18:46:07.441626 2762 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bc859bc98-d98w5" podUID="845a4208-e218-43c1-932f-f50e27d32bf1"